I've worked on quite a few large-scale scientific collaborations like this (and also worked on/talked to the lead scientists of LSST), and typically the end groups that do science aren't the ones handling the massive infrastructure. That usually goes to well-funded sites with great infrastructure, who then provide straightforward ways for the smaller science groups to operate on the bits of data they care about.
Personally, I have pointed the grid folks (I used to work on grid) towards cloud, and many projects like this have a tier 1 in the cloud. The data lives in S3, metadata in some database, and alerts go out through the cloud provider's notification system. The scientists work in adjacent AWS accounts that have access to those systems and can move data pretty quickly.
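A minimal sketch of that pattern with boto3, just to make the shape of it concrete (the bucket and topic names here are made up for illustration, not actual Rubin/LSST resources):

```python
import boto3

# Hypothetical names, purely for illustration -- not real Rubin/LSST resources.
BUCKET = "survey-raw-exposures"
TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:new-exposure"

s3 = boto3.client("s3")
sns = boto3.client("sns")

# List exposures that landed tonight and notify downstream science groups.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="night-2025-06-23/")
for obj in resp.get("Contents", []):
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="new exposure available",
        Message=f"s3://{BUCKET}/{obj['Key']} ({obj['Size']} bytes)",
    )
```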
The difference with this project is the data from Rubin itself isn’t where most of the scientific value comes from. It’s from follow up observations. Coordinating multiple observatories all with varying degrees of programmatic access in order to get timely observations is a challenge. But hey if you insist on being an “everything is easy” Andy I won’t bother anymore.
I've set up and built my own machines and clusters, as well as setting up grids and industrial-scale infrastructure. I've seen many closet clusters, and clusters administrated by grad students. Since then, I've gone nearly 100% cloud (with a strong preference for AWS).
In my experience, there are many tradeoffs using cloud, but I think when you consider the entire context (people-cost-time-productivity) AWS ends up being a very powerful way to implement scientific infrastructure. However, in consortia like this, it's usually architected in a way that people with local infrastructure (campus clusters, colo) can contribute, although they tend to be "leaf" nodes in processing pipelines rather than central players.
> For low earth, you also can’t usually offload data over the target.
That capability is coming with Starlink laser modules. They've already tested this on a Dragon mission, and they have the links working between some satellite shells. So you'd be able to offload data from pretty much everywhere Starlink has a presence.
So stoked for this observatory to go online! One cool use it'll excel at is taking "deltas" between images to detect moving stuff. Close asteroids are one obvious goal, but I'm more interested in the next 'Oumuamua / Borisov-like objects that come in from interstellar space. It would be amazing to get early warnings about those, and be able to study them with the other powerful telescopes we have now.
(For those who haven't noticed, you can simply paste 186.66721+8.89072, or whichever target you're curious about, into an astronomy database like Aladin[0], and there right-click on "What is this?")
They look like they're roughly in the same plane. Is it safe to assume that, or could they be really distant from each other along the line of sight? The similarity in size makes me think they're close together, but I don't have any reason to be confident in that judgment.
It says that NGC 4410 is a set of (gravitationally) interacting galaxies. After clicking through the link, it calls it RSCG 55 instead and explains more. I don't understand the naming scheme.
The naming scheme is based on the principle "tens of thousands of people have done this over thousands of years, and they all named things themselves". It's not uncommon for objects to have ~20 separate names[1], with some having over a hundred[2].
In this particular case, RSCG 55 means a group of galaxies[3], of which NGC 4410 is one member. Apparently RSCG is the "Redshift Survey Compact Groups" (https://cds.unistra.fr/cgi-bin/Dic-Simbad?RSCG) so 55 is just an index number.
That's also the case for the 4410 after NGC, which in that case stands for "New General Catalog". In contrast, the Sloan Digital Sky Survey gave NGC 4410 the name SDSS J122628.29+090111.4, where the numbers indicate its position in the sky.
The "index number" and the "position on the sky" are the two most popular naming strategies.
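To illustrate the positional scheme: a designation like SDSS J122628.29+090111.4 packs right ascension (hhmmss.ss) and declination (±ddmmss.s) into the name itself. A rough, hypothetical parser (not an official tool):

```python
import re

def parse_sdss_name(name: str):
    """Decode an 'SDSS Jhhmmss.ss+ddmmss.s' designation into RA/Dec in degrees."""
    m = re.match(
        r"SDSS J(\d{2})(\d{2})(\d{2}\.\d+)([+-])(\d{2})(\d{2})(\d{2}(?:\.\d+)?)", name
    )
    if not m:
        raise ValueError(f"not a positional SDSS designation: {name}")
    h, mi, s, sign, d, am, asec = m.groups()
    ra_deg = (int(h) + int(mi) / 60 + float(s) / 3600) * 15   # 24 h of RA = 360 deg
    dec_deg = int(d) + int(am) / 60 + float(asec) / 3600
    return ra_deg, (-dec_deg if sign == "-" else dec_deg)

print(parse_sdss_name("SDSS J122628.29+090111.4"))  # ~ (186.618, 9.020)
```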
"like NGC 4410, above them in this image. The four interacting galaxies of that system are connected by tidal bridges, created by the gravity of each galaxy pulling on the others in the system."
I believe there would be a difference in their redshift/blueshift signatures if they were moving relative to each other, but as you say they clearly are in the same plane.
For anyone that hasn't clicked the link, it shows that in just a few days, the observatory has already found over 2000 new asteroids. That is indeed very impressive.
Why do the brighter objects have the four-way cross artifact? My (apparently incorrect) understanding was that those types of artifacts were a result of support structures holding reflecting mirrors on a telescope. But this camera just has a "standard" glass lens with nothing obstructing the light path to the sensor.
Those diffraction spikes are caused by the four-vane spider structure supporting the secondary mirror in the telescope's optical path, not by the camera lenses themselves.
You are not wholly wrong! There is both a supporting structure for the mirror, AND a glass lens in front of the sensor to further flatten the incoming light.
The interesting thing about the spikes in our images is that they stay fixed in image plane coordinates, not sky coordinates. So as the night sky moves (earth rotates) the spikes rotate relative to the sky leading to a star burst pattern over multiple exposures.
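A small illustration of why stacked exposures end up with a starburst: the spike angles are fixed in the camera frame, so on the sky they land at the spider angles plus whatever the field rotation was for each exposure (the rotation values below are made up):

```python
import numpy as np

spider_angles = np.array([0.0, 90.0, 180.0, 270.0])   # 4-vane spider, camera frame

# Field/rotator angle of each exposure relative to the sky (illustrative values).
exposure_rotations = [0.0, 15.0, 30.0, 45.0]

for rot in exposure_rotations:
    on_sky = (spider_angles + rot) % 360
    print(f"rotation {rot:5.1f} deg -> spikes at {on_sky} deg on the sky")

# Stacking these exposures superimposes all of the rotated '+' patterns,
# which is the multi-armed starburst seen around bright stars.
```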
Ah, thanks. I had seen a bunch of hype about the camera itself (which is on its own very impressive) and assumed that was the complete device. Didn't realize it was part of a larger telescope.
Image creator here. This is such a massive dataset that most of the image processing had to be custom-written software pipelines. It's not really practical for every pixel to be hand inspected. A few defects (and bright asteroids) made it through. It's really hard to decide what is a real weird thing in the universe and what is some sort of instrumental effect. We try not to pre-decide what we think we should be seeing and then filter for it with things such as classifiers. That leaves us with heuristics based on temporal information, size (is it smaller than a point spread function?), and other related things. Across large numbers of objects and pixels, 1-in-a-thousand or 1-in-a-million outliers are bound to occur.
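One of those heuristics, "is it smaller than a point spread function", is easy to sketch: nothing real on the sky can be sharper than the PSF, so a detection that is (a hot pixel, a cosmic-ray hit) is suspect. A crude toy version, not the actual pipeline code:

```python
import numpy as np

def looks_like_artifact(cutout: np.ndarray, psf_fwhm_px: float) -> bool:
    """Flag detections that are much narrower than the PSF.

    Real pipelines fit models; this just compares a crude second-moment
    width of the cutout against the expected PSF width.
    """
    yy, xx = np.indices(cutout.shape)
    total = cutout.sum()
    cx, cy = (cutout * xx).sum() / total, (cutout * yy).sum() / total
    var = (cutout * ((xx - cx) ** 2 + (yy - cy) ** 2)).sum() / total / 2
    fwhm = 2.355 * np.sqrt(max(var, 1e-9))
    return fwhm < 0.5 * psf_fwhm_px   # far sharper than the PSF -> likely not real

hot_pixel = np.zeros((15, 15))
hot_pixel[7, 7] = 100.0
print(looks_like_artifact(hot_pixel, psf_fwhm_px=4.0))   # True
```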
I'm glad you responded (I'm assuming you knew I wasn't criticizing the effort, but just in case -- I wasn't). I was assuming asteroid trail, but I've read that green stars can't exist and _could_ be a technosignature of "little green men". :) Your work on this is lovely. The combined effort of so many smart people over decades of work is truly heartening. Thank you.
At time 1:38:19 - one hour 38 minutes 19 seconds - into the livestream presentation, there's a slide that shows RGB streaks of fast-moving objects that were removed for the final image.
Those streaks are apparently asteroids.
Perhaps it is indeed a glitch or cosmic ray event.
While it would be cool to make a mobile app for this, having used both the tech stack behind this site (it's open source, and really great), and written a mobile app with a similar tech stack, $100k will get you one, but it's going to be a pain to debug all the various niggles around various devices, so you'll need at least double that to make a robust one (and then the question is, do you just go native and reimplement that tech stack).
I'm not typically this person, but Hacker News is just more and more "allow me to tell you". Buddy, I have 20 years of experience building mobile apps... including against scientific, open source, and plain old hack-AF platforms...
Image creator here. Now imagine: when the survey is done, we will be able to see even fainter objects and image an area of the sky 1000x this size.
Yeah I'm confused because couldn't a black hole between us and a star be the reason for a black spot? That times a bajillion for whatever else is out there.
Every set of deep field imagery reminds me that any point of light we see could be a star, a galaxy, or a cluster of galaxies. The universe is unimaginably vast.
For observatories like Rubin, is there a plan for keeping them open after the funding ends? Is it feasible for Chile to take over the project and keep it going?
On a practical note, what happens to a facility like this if one day it's just locked up? Will it degrade without routine maintenance, or will it still be operational in the event someone can put together funding?
There are already facilities like this (obviously not as new as Rubin) degrading for lack of funding, but that is usually because there's no better purpose for them. Space monitoring has been used in the past as a second life for facilities (outreach too), but ~1m-class telescopes are good enough now that networks of them are better than a 40+ year old telescope. It's also worth noting bits can be reused: buildings gutted and repurposed, telescopes/instruments moved or sold on, etc. But the real issue is having the staff to look after these places, and many older facilities are not always as amenable to automation as people (especially funding agencies) might like.
Arecibo was about 60 years old, for comparison, when it collapsed, but there are lots of facilities that are effectively ships of Theseus, with new instruments coming in over time which refresh the facility (and when that stops happening, then you get concerned).
It will continue with a new (spectroscopic) instrument after 10 years, funding permitting. Tololo has been running since the 60s. In California, Lick has been running since the 1880s.
I get that it will run for a long time as long as someone is maintaining it. I am wondering what will happen if the doors are locked and the power is cut for an extended period of time (5+ years), which seems like a very real possibility unless an alternate source of funding can be found.
Even one zoom-in and I find something interesting.
What's that faint illuminated tendril extending from M61 (the large spiral galaxy at the bottom center of the image) upwards towards that red giant? It seems too straight and off-center to be an extension of the spiral arm.
EDIT: The supposed "Tidal tail" on M61 was evidently known from deep astrophotography, but only rarely detected & commented upon.
The zoomed images look grainy as one would expect from raw data, but I would have expected them to do dark field subtraction for the chips to minimize this effect. Does anyone know if that's done (or expressly avoided) in this context, or why it might not be as helpful (e.g., for longer exposures)?
Image creator here. We do dark field subtraction, as well as many other instrumental calibrations. What you are seeing is the fundamental photon noise. Because it is statistical in nature, you can never completely eliminate it. We could have chosen to put the black point in the image at a much higher flux level, but if you go to a high enough signal to noise level that you see no grain anywhere, you would miss out on so many interesting things that are still quite obvious to make out but are only 2-3 sigma above the noise.
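A toy demonstration of that photon (shot) noise floor, unrelated to the actual Rubin pipeline but showing why a 2-3 sigma source and the grain live at the same level:

```python
import numpy as np

rng = np.random.default_rng(0)

sky_level = 100      # mean sky counts per pixel (arbitrary units)
faint_source = 25    # a real source only a few sigma above the sky

image = rng.poisson(sky_level, size=(512, 512)).astype(float)
image[256, 256] += faint_source

noise_sigma = np.sqrt(sky_level)           # Poisson statistics: sigma = sqrt(mean)
significance = faint_source / noise_sigma  # ~2.5 sigma for these numbers

print(f"per-pixel grain ~ {noise_sigma:.1f} counts, source at ~{significance:.1f} sigma")
# Raising the display black point high enough to hide the grain would also
# hide this perfectly real ~2.5 sigma source.
```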
Related: When a Telescope Is a National-Security Risk [1];
TL;DR: VCRO is capable of imaging spy and other classified US satellites. An automated filtering system (which involves routing through some government processing facility) is in place to remove them from the freshly captured raw data used for the public transient-phenomena alert service. Three days later, unredacted data is made available (by then the elusive, variable-orbit assets are long gone).
Those are diffraction spikes, caused by how the light interacts with the support structure holding the secondary mirror. Each telescope has a different pattern (Hubble, JWST, etc.). I think they only happen for stars, and not for galaxies (an easy way to know which is which), but I might be wrong on that (there's a possibility for faint stars not to have them IIRC).
This one's extra-special! The pattern is multiple + shapes, rotated and superimposed on top of each other. And they're different colors! That's this telescope's signature scanning algorithm: I don't know what it is exactly, but it's evident it takes multiple exposures, in different color filters, with the sky rotated differently relative to the CCD plane in each exposure. I assume there's some kind of signal processing rationale behind that choice.
edit: Here's one of the bright stars, I think it's HD 107428:
This one has asteroid streaks surrounding it (it's a toggle in one of the hidden menus), which gives a strong clue about the timing of the multiple exposures. The asteroids are going in a straight line at a constant speed, so the spacing and colors of the dots show what the exposure sequence was.
I think this quote explains the reason they want to rotate the camera:
> "The ranking criteria also ensure that the visits to each field are widely distributed in position angle on the sky and rotation angle of the camera in order to minimize systematic effects in galaxy shape determination."
https://arxiv.org/abs/0805.2366 ("LSST [Vera Rubin]: from Science Drivers to Reference Design and Anticipated Data Products")
No, they happen for absolutely every externally generated pixel of light (that is, not for shot noise, or fireflies that happen to fly between the mirrors). Where objects subtend more than one pixel, each pixel will generate its own diffraction pattern, and the superposition of all of them is present in the final image. Of course, each diffraction pattern is offset from the next, so they mostly just broaden (smear out), not intensify.
However, the brightness of the diffraction effects is much lower than the light of the focused image itself. Where the image is itself dim, the diffraction effects might not add up to anything noticeable. Where the image supersaturates the detector (as can happen with a 1-pixel-wide star), the "much lower" fraction of that intensity can still be annoyingly visible.
It depends on the science you're doing, as even these small effects add up. There's a project within the LSST science team (which a colleague is working on) to reduce this scattered light (search for "low surface brightness"), with a whole lot of work around modelling and understanding what effect the telescope system has on the idealised single point that is a star.
There are projects (dragonfly and huntsman are the ones I know of) which avoid using mirrors and instead use lenses (which have their own issues) to reduce this scattered light.
The same effect is used for Bahtinov focusing masks. From what I know, all light will bend around the structures, but stars are bright and focused enough for the spikes to be visible; in theory galaxies would show them too.
Diffraction spikes [1] are a natural result of the wave-like nature of light, so they occur for all objects viewed through a telescope, and the exact pattern depends on the number and thickness of the vanes.
My favourite fact about these in relation to astronomy is that you can actually get rid of the diffraction spikes if your support vanes are curved, which ends up smearing out the diffraction pattern over a larger area [2]. However this is often not what you want in professional astronomy, because the smeared light can obscure faint objects you might want to see, like moons orbiting planets, planets orbiting stars, or lensed objects behind galaxies in deep space. So you often want sharp, crisp diffraction spikes so you can resolve these faint objects next to or behind the bright object that's up front.
"Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space."
“Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.” -Douglas Adams
I really like the Rubin because I think a lot of people focus too much on "deep" seeing (i.e., looking at individual or several objects with very high magnification only once). The Rubin does much more "wide" seeing, and this actually produces a ton of useful data: basically, enough data to collect reliable statistics about things. This helps refine cosmological models in ways that smaller individual observations cannot.
What's amazing to me is just how long it took to get to first photo: I was working on the design of the LSST scope well over 10 years ago, and the project had been underway for some time before that. It's hard to keep attention on projects for that long when a company can IPO and make billions in just a few years.
Speaking of wide-fields, check out the Xuntian space telescope, which has (will have) a 1.1 degree field of view and a 2.5 gigapixel camera.
My feeling is the "deep" vs "wide" thing is a circumstance of which groups you interact with (and also which facilities you have access to, and even to some extent the culture of your science community). Rubin is an example of what you can do when you build something massive specifically for a single purpose, and as more of these kind of facilities come online (SDSS and Gaia have been around for a while, but DESI, 4MOST and other similar facilities are coming, and let's not forget radio), it's what we get out of the whole suite supporting each other that gets the best science.
You worked on the design? That is interesting. I worked on simulating the LSST, back in 2008 to 2010; the goal was to test the data reduction software. We were on the Image Simulation team.
It is surreal to see LSST/Rubin finally get first light.
Even more interesting to see who is still working on LSST, and who is not.
We also simulated the LSST, in this case using Exacycle at Google (an idle-cycle harvester). We took a star catalog and passed it through a highly accurate ray tracer that simulated the light falling on the sensors (through space, atmosphere, etc.). Apparently it found some bug in the design that was fixed before some expensive part was built (my coworkers were the subject matter experts; I mainly built the Exacycle software and sat in on the meetings).
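For a flavor of what such a simulation does (massively simplified; a real image simulator traces rays through the optics and a layered atmosphere), here is a toy sketch that just scatters catalog photons with a Gaussian "seeing" jitter onto a pixel grid:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_star(x, y, n_photons, seeing_sigma_px, shape=(64, 64)):
    """Scatter photons from a point source onto a pixel grid.

    Atmospheric seeing is approximated as a Gaussian jitter on each photon's
    arrival position -- a stand-in for a full ray trace through turbulence.
    """
    xs = rng.normal(x, seeing_sigma_px, n_photons)
    ys = rng.normal(y, seeing_sigma_px, n_photons)
    image, _, _ = np.histogram2d(
        ys, xs, bins=shape, range=[[0, shape[0]], [0, shape[1]]]
    )
    return image

# A tiny "catalog": (x, y, photon count) -- made-up values.
catalog = [(20.3, 31.7, 50_000), (45.1, 10.2, 5_000)]
frame = sum(simulate_star(x, y, n, seeing_sigma_px=1.5) for x, y, n in catalog)
print(int(frame.sum()))   # total photons that landed on the sensor
```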
Deep is still interesting in understanding the origins of the universe. Rubin seems highly practical on the flip side. It'll be a super helpful tool in predicting asteroid impacts.
Also microlensing events, supernovae, and many other things in our very dynamic universe.
Also new planets! Planet Nine should likely be resolved within months, one way or another.
> “Probably within the first year we’re going to see if there’s something there or not,” says Pedro Bernardinelli, an astronomer at the University of Washington.
https://www.nationalgeographic.com/science/article/is-there-...
nice
Welcome to Hacker News!
It is generally recommended to upvote a comment you appreciate rather than making a comment that isn't adding substance. It helps keep the signal rate higher.
Or detecting more unusual interstellar objects like 'Oumuamua.
It's not just about pretty pictures (though those are great); it's about building massive datasets that let us actually do statistics on the universe.
It does wide through image stacking/repeated visits. The speed and FOV is the key here.
Too late to edit - I meant Deep.
The "wide" mode is called "survey" astronomy, and there have been several large surveys like Rubin/LSST, going all the way back to the Sloan Digital Sky Survey, which started in 2000 (if you count surveys from before the era of digital sensors, there are surveys going back more than 100 years).[0] Rubin/LSST is just the newest and most advanced large, ground-based optical survey.
Both modes of observation - surveys and targeted observations of individual objects - are necessary for astronomical research. Often, large surveys are used to scan the sky, and then targeted observations are used to follow up on the most interesting objects.
0. https://en.wikipedia.org/wiki/Sloan_Digital_Sky_Survey
Note that "seeing" means something very specific in astronomy: https://en.wikipedia.org/wiki/Astronomical_seeing.
The asteroid detection capability is amazing: https://rubinobservatory.org/news/rubin-first-look/swarm-ast...
And supernovae: https://m.youtube.com/watch?v=Ch18t9cz-JU&pp=ygUETHNzdA%3D%3...
Among many other uses: https://m.youtube.com/watch?v=h6QYjNjivDE
That is likely the most unexcitedly unsettling video I have ever seen. Amazing storytelling really.
It’s like swimming in a lake or river and thinking the water is just water but then you take a closer look and it’s just incredibly alive to the point of absurdity.
I suppose the weeds, bugs, bacteria, frogs, fish, and snakes are equally unlikely to harm us, but nonetheless. Holy shit!
I was just coming back to comment on the existential dread elicited by that video.
This is really going to revolutionize our ability to detect and predict asteroid impacts.
And just in the nick of time!
Whoa that's incredible.
(And amazing production of the actual video as well)
Pretty sure you can see some kind of masking for satellites in some of the frames of the asteroid videos.
I can't wait to see what it turns up once it's running full tilt
Wow, they should have led with this.
Which also shows the astronomically low odds of asteroids hitting Earth, even with “so many” of them. To me it changes nothing.
If it has the potential to wipe out our entire species, but there's something we could do to prevent it (which I'm not sure about w/r/to asteroids), then it's worth looking out for the black swan event.
Doing some extremely rough math along these lines to double check myself:
* Gemini says that a dinosaur-extinction-level asteroid hits Earth about once every 100 million years. So in any given year, the chance is about 0.000001% (1 in 100 million).
* Economists say a human life is worth about 10 million dollars. There are about 8 billion people on Earth. So the total value of all human life is $80,000,000,000,000,000 (or 8e+16).
* So in any given year, the expected value of asteroid protection is $800,000,000 (likelihood of an impact that year times the value of the human life it would wipe out).
* The Guardian says the Vera Rubin telescope cost about $2,000,000,000 (2 billion).
By that measure, assuming the Rubin telescope prevents any dinosaur-extinction-level asteroid impacts, it will pay for itself in about two and a half years.
https://www.npr.org/transcripts/835571843
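For anyone who wants to poke at the numbers, the same back-of-the-envelope estimate in a few lines of Python (every input is one of the rough figures quoted above, not a vetted value):

```python
# Expected-value estimate for asteroid protection; all inputs are the rough
# assumptions from the comment above, not authoritative numbers.
impact_rate_per_year = 1 / 100_000_000   # extinction-level impact ~once per 100 Myr
value_per_life_usd = 10_000_000          # commonly cited "value of a statistical life"
population = 8_000_000_000
telescope_cost_usd = 2_000_000_000       # rough Rubin construction cost

value_of_all_life = value_per_life_usd * population               # 8e16
expected_loss_per_year = impact_rate_per_year * value_of_all_life # 8e8

print(f"expected annualized loss: ${expected_loss_per_year:,.0f}")
print(f"payback if the telescope removes that risk entirely: "
      f"{telescope_cost_usd / expected_loss_per_year:.1f} years")   # ~2.5
```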
It seems incredibly bizarre to assign a monetary value to the elimination of all human life given the concept of monetary value would be wiped out along with the people.
The counterpoint is that not doing so (implying some sort of infinite monetary loss if the entire human species is wiped out) would mean you should spend every single unit of monetary value of the entire global economy on preventing this (which is also obviously nonsense - people have to eat after all).
So you have to put the monetary value somewhere (although you're completely within your right to question this specific amount).
I think what I’m trying to express is that it feels like the answer isn’t any amount of money, it’s just undefined, like a division by zero or trying to read the value of a binary register on a machine that’s turned off. I think Pirsig called it a Mu answer.
That's the most interesting application of capitalism-as-a-resource-allocation mechanism I've ever seen; something I look forward to thinking about more.
My immediate reaction though is to doubt the mapping of dollar to value - e.g., the 10 million dollar valuation of the human life, but also the valuation then of all the things that year-dollar-cost could be spent on. Many of those things probably don't map very well between true value, and dollar cost (my go-to example of this is teachers fulfilling one of the most critical roles to ensure a functioning society, yet the dollar cost paid for their labor being typically far lower than most other jobs).
You're right to doubt it!
And indeed, accounting for externalities (unmeasured or unmeasurable) is a tough economic proposition. If it weren't hard to account for every single variable, creating a planned economy would be easier (ish).
FWIW, there's a whole sub-field just dedicated to determining the value of life for various purposes (a starting link: https://en.wikipedia.org/wiki/Value_of_life). You may disagree with any specific assessment, but then you have to argue how that value should be calculated differently.
These numbers are not what I expected at all.
So you could actually make an argument that to a country like the US, full 100% reliable asteroid protection is only worth like $50M/year (even if an impact means full extinction)?
So if upkeep for a detection/deflection system costs more than that, we'd be "better off" just risking it?! That's insane. I would have expected this number to be much higher than $50M/year.
One thing this analysis is missing is the smaller asteroids. For every planet-altering asteroid, there are hundreds that could cause a tsunami that would wipe out a few cities.
Good point, but I think those are "worth" less from a risk-analysis PoV: a 1km-diameter impact is apparently about 200 times more likely (roughly 1 per 500,000 years) according to Wikipedia, but would need to kill 40M people to match the extinction-level asteroid risk (so basically: unmitigated hit on Tokyo or bust).
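Spelling out that break-even figure with the same rough assumptions as above:

```python
extinction_rate = 1e-8            # per year, ~1 per 100 Myr
km_rate = 200 * extinction_rate   # ~1 per 500,000 years for ~1 km impactors

expected_deaths_extinction = extinction_rate * 8e9            # ~80 "expected" deaths/yr
breakeven_deaths_1km = expected_deaths_extinction / km_rate   # ~4e7

print(f"{breakeven_deaths_1km:,.0f} deaths per 1 km impact to match the extinction risk")
```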
To be honest, I think the 1km diameter range might still be a major fraction of the actual risk, because the estimates around "human extinction every 100Ma" are probably much too pessimistic.
There is also a difference between a mass-extinction asteroid like the one that got the dinosaurs and one that destroys human civilization. A smaller one wouldn't make humanity extinct but could kill most of the people alive. 1km might be big enough to do that, depending on the amount of dust and cooling.
They may be worth less, but each class of them bumps up the value of detection, since detection helps against all of them.
The economists calculated the value of 1 life. The calculation might be different if it extinguishes the whole of humanity (and thousands of other species). In a way, it also represents all future human lives. Should we include those?
I don't believe that this would change the outcome much: It seems hard to argue that preservation of a nonhuman species would be worth more than a million lives (=> negligible) and assuming global loss of all human life is already unreasonably pessimistic in my view-- (e.g. the Chicxulub impactor would not have achieved this).
I also think that fully accounting for multi-generational consequences is murky/questionable and not really something we do even in much more obvious cases: Eligible people deciding against having children are not punished for depriving future society of centuries of expected workyears, and neither are mothers/fathers rewarded for the reverse.
But even if you accounted for losing 3 full generations and some change (for biodiversity loss), that still leaves you in the ~$200M/year range.
Currently we don't have reliable asteroid deflection capability at any price (but it would be technically somewhat in reach). Still, just imagine a future NASA budget discussion that goes "we're gonna have to mothball our asteroid deflector 3000 because it eats 5% of the yearly NASA budget and that's just not worth it" -- that could be the mathematically correct choice, which confounds me.
I think where the calculations are breaking down is in the probability of asteroid strikes.
All the math assumes that the probabilities follow historic trends and are relatively static. With single-digit events, we really have no way of knowing what the actual likelihood of impact is. It could be 1 in 100 million, or it could actually be 1 in 1 million and we've been rolling a bunch of nat 20s.
Before we build out the asteroid blaster 9000, the first step is detection. With that in place, we get actually good risk and probability calculations. If the detector tells us "there's no object that will strike Earth in the next 1000 years", we can safely not put any budget into asteroid defense. If, on the other hand, the detector shows "Chicxulub 2.0 will hit in the next 100 years", then your probability of an impact is 1 and the budget actually worth spending is going to be much closer to that $8e+16 number calculated earlier.
While I'm strongly supportive of survey astronomy in general...
We can already say that we have very high completion of cataloguing near-Earth objects that are anywhere near extinction-event / Chicxulub-sized (~10km), and have a majority of catastrophic / country-killer (~1km), and are digging deeper and deeper into regional / city-killer (~100m) bodies.
What we don't have is comets. Comets on long period orbits just aren't readily detectable with this sort of survey unless they're quite close in to the Sun, and I don't think we have great statistics on frequency vs size, size being something that requires very specific radar cross-checking to establish with any confidence. A long-period comet or hyperbolic body has a potential impact velocity much higher than inner system asteroids, and impact energy scales with impact velocity squared.
Will Rubin detect comets? I'd assume not, as it seems like they'll only really be visible as they approach the Sun (or if they end up blocking a line of stars).
The problem is that the difference in optical/NIR brightness (apparent magnitude) between a long-period comet core that's going to hit us in 1000 years, and a long-period comet core that's going to hit us in six months, might be a factor 10^12 (magnitude 10 vs magnitude 40) or worse. Normally brightness drops off with distance squared for light sources, but comets without any tail or halo aren't emitting all that much light, they're reflecting it, and (except for a very brief period) they're about as far from us as they are from the sun. This means that brightness drops with distance to the fourth power. Cometary tails also only offgas a significant amount near the sun. Comet cores are expected to be extremely dark / low-reflectivity due to space weathering producing a carbon coating not unlike chimney-creosote.
You can fight this a bit by working in the thermal infrared, which you really need a specific sort of space telescope for. But long-period comets and hyperbolic impactors will be a probabilistic threat for the foreseeable future. I would say "Be thankful that they're so rare", but the data from observatories like Rubin on these bodies during points of their orbit where they're close enough to the sun to actually detect, is necessary to statistically characterize their existence with any confidence.
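A quick sketch of that distance-to-the-fourth scaling (a toy model that ignores phase angle and cometary activity, and assumes the body is far enough out that its Sun and Earth distances are comparable):

```python
import math

def reflected_flux(distance_au, albedo=0.04, radius_km=5.0):
    """Relative reflected flux from an inactive comet nucleus (toy model).

    Incident sunlight falls off as 1/d_sun^2 and the reflected light falls off
    again as 1/d_obs^2, so with d_sun ~ d_obs ~ d the received flux goes ~1/d^4.
    """
    return albedo * radius_km**2 / distance_au**4

def delta_magnitude(d_near_au, d_far_au):
    """Magnitude difference for the same nucleus seen at two distances."""
    ratio = reflected_flux(d_near_au) / reflected_flux(d_far_au)
    return 2.5 * math.log10(ratio)

# e.g. a nucleus at 2 au versus the same nucleus at 2000 au:
print(delta_magnitude(2, 2000))   # 30 magnitudes, i.e. a factor of ~10^12 in flux
```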
I agree that detection is a very helpful first step and almost enough on its own. But I'm unsure how far this can be pushed-- I think impact certainty for a century or more might be physically impossible, because of uncertainty in orbital parameters and chaotic behavior of the whole system.
I also believe the approximate bounds we have on impact probability are good enough for this estimate and quite unlikely to be off by a factor of 100, because we can guess at both size distribution and impact likelihood from craters (on earth and moon), and if the >10km object impact likelihood was over 1/million years we would expect to see a hundred times more craters of the corresponding size...
> I think impact certainty for a century or more might be physically impossible, because of uncertainty in orbital parameters and chaotic behavior of the whole system.
We already have 10s of years of certainty with the current observations. Most of the uncertainty comes from the interactions of unknown objects. As the mappings of objects increase, our predictions will become much better.
The other thing to consider is that large objects will have much better certainty. A 10km asteroid won't be influenced (much) by colliding with 100 1m asteroids. It will only be impacted if it hits or swings by something like a 1km asteroid.
Rubin should in a pretty short timeframe (a few years) give us an orbital mapping of all the >1km asteroids, which is pretty exciting.
Around 500 tonnes of meteorites hit earth every year.
Tracking large near earth objects is wise for several global and domestic security reasons.
Have a great day =3
The wikipedia article is quite good - https://en.wikipedia.org/wiki/Vera_C._Rubin_Observatory (Edit: Treasure trove of details in the references if any of your interests are adjacent to this)
The image of the woman holding the model of the sensor is nice because it includes a moon for scale.
Question I was curious about is whether or not the focal plane was flat (it is).
This is an interesting tidbit:
> Once images are taken, they are processed according to three different timescales, prompt (within 60 seconds), daily, and annually.
> The prompt products are alerts, issued within 60 seconds of observation, about objects that have changed brightness or position relative to archived images of that sky position. Transferring, processing, and differencing such large images within 60 seconds (previous methods took hours, on smaller images) is a significant software engineering problem by itself. This stage of processing will be performed at a classified government facility so events that would reveal secret assets can be edited out.
They are estimating 10 million alerts per night, which will be released publicly after the previously mentioned assessment takes place.
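The core idea behind those prompt alerts, differencing a fresh exposure against an archived template, can be sketched in a few lines; the real pipeline does PSF matching, proper flux calibration, and far more careful noise modelling:

```python
import numpy as np

def find_transients(new_image: np.ndarray, template: np.ndarray, sigma_thresh: float = 5.0):
    """Flag pixels that changed significantly relative to an archived template (toy version)."""
    diff = new_image - template
    noise = np.std(diff)   # crude noise estimate
    ys, xs = np.nonzero(np.abs(diff) > sigma_thresh * noise)
    return list(zip(ys.tolist(), xs.tolist()))

rng = np.random.default_rng(1)
template = rng.normal(100, 10, size=(256, 256))
new = template + rng.normal(0, 1, size=template.shape)
new[40, 50] += 200                       # something brightened between visits
print(find_transients(new, template))    # -> [(40, 50)]
```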
>The prompt products are alerts, issued within 60 seconds of observation, about objects that have changed brightness or position relative to archived images of that sky position. Transferring, processing, and differencing such large images within 60 seconds (previous methods took hours, on smaller images) is a significant software engineering problem by itself.[64]
>This stage of processing will be performed at a classified government facility so events that would reveal secret assets can be edited out.
Interesting, I'm guessing secret spy satellites?
"Let's look for spy satellites / orbiters" was an "application" I wondered about. My second thought about this was: maybe the US (and possibly other countries) already have something like this, but classified?
The US already has a very sophisticated system for this.
https://en.wikipedia.org/wiki/United_States_Space_Surveillan...
There is the Space Surveillance Telescope [1] in Australia. It is a similar military telescope for space tracking. It is only 3.5m, compared to the LSST's 8m.
1: https://en.wikipedia.org/wiki/Space_Surveillance_Telescope
Note that lots of the LSST funding is from the DOE. Part of the value for the government might be tracking Chinese satellites.
I expect a lot of events to get filtered that foreign governments expect to stay reasonably secret, even if they aren't friendly with the US. It's a game.
The thing that really saddens me is that the military gets to filter the data first and scientists only get to see the already manipulated data instead of a raw feed from their own instrument.
I thought all satellites already have known orbits?
Both because they can't be made invisible, and because you need to avoid collisions.
Many can (and do) change orbits.
It’s spy satellites (mainly domestic). In some cases, they don’t actually need to be removed, just embargoed until an orbital change.
.. and aliens, of course ...
Back in January 2010 I went on a blind date with a lady who’s now my wife, an astrophysicist. We talked about this instrument and how Google would shuffle petabytes of raw observations, then distill them into datasets researchers could actually use (I don't know if Google is still involved?). We’ll celebrate 15 years of marriage this January, and I have been following the progress of this telescope since 2007 or so. It's amazing how long it takes for these instruments to come online, but the benefits are significant.
> We’ll celebrate 15 years of marriage this January,
Congrats!
I'm the Rubin team member responsible for mapping the data into RGB images. I have been a long-time reader of Hacker News, but finally made an account to comment on this. I wanted to thank everyone here for their interest and for taking the time to check out these images. Seeing everyone interested and engaged makes all the long hours worth it.
Thank you for your work!
What range of wavelengths are in the original images? Do you produce multiple RGB images for looking at different things? c'mon, what does that entail? ;-)
The filters used for this range from near-infrared to near-UV. We used 4 different filters in all for this image (the telescope has more). In general, yes, to fully appreciate all the color information as a human we need to generate different color combos so our eyes can pick up different contrasts.
However, what we strive for is being accurate to "if your eyes COULD see like this, it would look like this". To the best of our ability, of course. We did a lot of research into human perception to create this and tried to map the color and intensity information in a similar way to how your brain constructs that information into an image.
Let me tell you, I did not appreciate how deep a topic this was before starting, and how limited our file formats and electronic reproduction capabilities are for this. The data has such a range of information (in color and intensity) that it is hard to encode into existing formats that most people are able to display. I really want to spend some time doing this in modern HDR (true HDR, not tone-mapping), where the brightness can actually be encoded separately from just RGB values. The documentation on these (several competing) formats is a bit all over the place though.
Edit: I wanted to edit to add, if anyone reading this is an expert in HDR formats and/or processing, I'd love to pick your brain a bit!
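For anyone who wants to experiment with this kind of multi-band-to-RGB mapping at home, the asinh stretch of Lupton et al. (2004) is a widely used starting point and ships with astropy. This is only a sketch of that general technique, not the Rubin team's actual pipeline, and the file names and stretch parameters below are made up:

    from astropy.io import fits
    from astropy.visualization import make_lupton_rgb

    # Hypothetical co-added exposures of one field in three filters.
    i_band = fits.getdata("coadd_i.fits")
    r_band = fits.getdata("coadd_r.fits")
    g_band = fits.getdata("coadd_g.fits")

    # Asinh stretch: faint structure stays visible while bright cores
    # roll off gracefully instead of clipping straight to white.
    rgb = make_lupton_rgb(i_band, r_band, g_band,
                          stretch=0.5, Q=8, filename="field_rgb.png")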
I'm impressed so much thought went into how to colorize the image! Sometimes it seems like space photos are just colorized thoughtlessly, or to increase the "wow" factor, so it's great to hear how careful and thoughtful you guys were in mapping this data to color-space.
Here's the SDSS view[0] of this featured[1] section from the Virgo Cluster, in comparison, to put the staggering depth of these exposures in their proper context,
[0] https://aladin.cds.unistra.fr/AladinLite/?target=12%2026%205...
[1] https://rubinobservatory.org/gallery/collections/first-look-...
With an opacity slider, for easy comparison:
https://aladin.cds.unistra.fr/AladinLite/?baseImageLayer=CDS...
Thanks for the link, I didn't know one can do this with Aladin Lite! But to be fair, if we compare to DESI LS, it looks much less impressive. I.e. all the shells/tidal debris are basically visible in DESI.
Agreed. Here is the link: https://aladin.cds.unistra.fr/AladinLite/?baseImageLayer=CDS...
I agree their results are also great! We do go a bit deeper, but the big difference is the speed at which we are able to build these images. We are able to image a larger area of the sky in each exposure, and are able to collect more light. This will let us build images like this one in a few hours of observation, and build up an equivalent image of the entire southern hemisphere.
The amount of data this thing will be putting out every night is insane. For years now the community has been building the infrastructure to be able to efficiently consume it for useful science, but we still have work to do. Anyone interested in the problem of pipelining and distributing 10s of TB of data a night should check out the LSST and related GitHubs.
I've followed this project for over a decade and the amount of data they are moving around is fairly routine, given their budget size and access to computing and networking resources. The total storage (~40-50PB) is pretty large, but moving 10TB around the world isn't special engineering at this point.
It's not just about the size of the data in bytes; it's also the number of changes that need to be detected and alerts that need to be sent out (estimated at millions a night). Keep in mind the downstream consumers of this data are mostly small scientific outfits with extremely limited software engineering budgets.
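To make the consumer side concrete, the day-to-day shape of the problem for a small group is roughly "filter a firehose of alert packets down to the handful worth following up". A minimal sketch, assuming a Kafka-style broker; the broker address, topic, and field names are invented, and real alert packets are Avro-encoded rather than JSON:

    import json
    from confluent_kafka import Consumer

    # Placeholder connection details, not a real alert broker endpoint.
    consumer = Consumer({
        "bootstrap.servers": "alerts.example.org:9092",
        "group.id": "small-science-group",
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["example-alert-topic"])

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            alert = json.loads(msg.value())  # real packets are Avro, decoded via a schema
            # Keep only bright candidates for follow-up observations.
            if alert.get("mag") is not None and alert["mag"] < 19.0:
                print(alert["alertId"], alert["ra"], alert["dec"], alert["mag"])
    finally:
        consumer.close()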
Again, nothing special. The small outfits aren't going to be doing the critical processing.
…they do the science
I've worked on quite a few large-scale scientific collaborations like this (and also worked on/talked to the lead scientists of LSST) and typically, the end groups that do science aren't the ones handling the massive infrastructure. That typically goes to well-funded sites with great infrastructure who then provide straightforward ways for the smaller science groups to operate on the bits of data they care about.
Here's the canonical example: https://home.cern/science/computing/grid and a lab that didn't have enough horsepower using a different grid: https://osg-htc.org/spotlights/new-frontiers-at-thyme-lab.ht...
Personally, I have pointed the grid folks (I used to work on grid) towards cloud, and many projects like this have a tier 1 in the cloud. The data lives in S3, metadata in some database, and they use the cloud provider's notification system. The scientists work in adjacent AWS accounts that have access to those systems and can move data pretty quickly.
The difference with this project is the data from Rubin itself isn’t where most of the scientific value comes from. It’s from follow up observations. Coordinating multiple observatories all with varying degrees of programmatic access in order to get timely observations is a challenge. But hey if you insist on being an “everything is easy” Andy I won’t bother anymore.
If you’re dealing with a fairly constant amount of data every day for years, using the cloud will be way more expensive than necessary.
The whole thread comes off as an AWS sales pitch...
I've setup and built my own machines and clusters, as well as setting up grids, and industrial scale infrastructure. I've seen many closet clusters, and clusters administrated by grad students. Since then, I've gone nearly 100% cloud (with a strong preference for AWS).
In my experience, there are many tradeoffs using cloud but I think when you consider the entire context (people-cost-time-productivity) AWS ends up being a very powerful way to implement scientific infrastructure. However, in consortia like this, it's usually architected in a way that people with local infrastructure (campus clusters, colo) can contribute- although they tend to be "leaf" nodes in processing pipelines, rather than central players.
Why move the data? Why not just enable permissions on cloud sharing a la Snowflake or Iceberg?
Sure, that also works, although it often leads to problems around cost and scalability and environment customization.
Yep, the data engineering side of this is just as fascinating as the astronomy
Is this not the same problem high resolution spy satellites have? Seems like a fair bit of crossover at least?
Spy sats are more bandwidth and power constrained. For low earth, you also can’t usually offload data over the target.
> For low earth, you also can’t usually offload data over the target.
That capability is coming with starlink laser modules. They've already tested this on a dragon mission, and they have the links working between some satellite shells. So you'd be able to offload data from pretty much everywhere starlink has presence.
Vera Rubin is producing ~4 Gbps constantly. Just dealing with the heat to send that much data is highly nontrivial.
So stoked for this observatory to go online! One cool use it'll excel at is taking "deltas" between images to detect moving stuff. Close asteroids are one obvious goal, but I'm more interested in the next Oumuamua / Borisov-like objects that come in from interstellar space. It would be amazing to get early warnings about those, and be able to study them with the other powerful telescopes we have now.
> So stoked for this observatory to go online!
Second this, but other areas are of great interest too. Kuiper Belt discoveries and surveys FTW!
Counter-rotating spiral galaxies. Super neat! https://skyviewer.app/embed?target=186.66721+8.89072&fov=0.2...
> "?target=186.66721+8.89072"
(For those who haven't noticed, you can just simply paste 186.66721+8.89072 or whichever target you're curious about in an astronomy database like Aladin[0], and there right-click on "What is this?")
[0] https://aladin.cds.unistra.fr/AladinLite/?target=12%2026%204...
I wonder if there's some kind of gravitational lensing going on. A lot of the galaxies look similar, but in different orientations.
https://skyviewer.app/embed?target=186.66721+8.89072&fov=0.2...
https://skyviewer.app/embed?target=185.46019+4.48014&fov=0.6...
https://skyviewer.app/embed?target=188.49629+8.40493&fov=1.3...
(Quick side note, if you go to /explorer instead of /embed you can zoom out so you can see the whole image at once)
https://skyviewer.app/explorer?target=187.69717+12.33897&fov...
That is interesting!
They look like they're roughly in the same plane. Is it safe to assume they're roughly in the same plane, or could they be really distant along the line of sight? The similarity in size makes me think they are, but I don't have any reason to be confident in that judgment.
Those are NGC 4411 a+b and they're indeed right next to each other,
https://noirlab.edu/public/images/iotw2421b/ ("thought to be right next to each other — both at a distance of about 50 million light-years")
What's going on directly above with what looks to be 3-4 galaxies interacting?
It says that NGC 4410 is a set of (gravitationally) interacting galaxies. After clicking through the link, it calls it RSCG 55 instead and explains more. I don't understand the naming scheme.
The naming scheme is based on the principle "tens of thousands of people have done this over thousands of years, and they all named things themselves". It's not uncommon for objects to have ~20 separate names[1], with some having over a hundred[2].
In this particular case, RSCG 55 means a group of galaxies[3], of which NGC 4410 is one member. Apparently RSCG is the "Redshift Survey Compact Groups" (https://cds.unistra.fr/cgi-bin/Dic-Simbad?RSCG) so 55 is just an index number.
That's also the case for the 4410 after NGC, where NGC stands for "New General Catalog". In contrast, the Sloan Digital Sky Survey gave NGC 4410 the name SDSS J122628.29+090111.4, where the numbers indicate its position in the sky.
The "index number" and the "position of the sky" are the two most popular naming strategies.
[1] NGC 4410 has 37, but the NGC objects are among the more popular https://simbad.u-strasbg.fr/simbad/sim-id?Ident=+NGC+4410&Nb... [2] https://simbad.u-strasbg.fr/simbad/sim-id?Ident=M87&submit=s... [3] https://simbad.u-strasbg.fr/simbad/sim-id?Ident=RSCG+55&NbId...
"like NGC 4410, above them in this image. The four interacting galaxies of that system are connected by tidal bridges, created by the gravity of each galaxy pulling on the others in the system."
Dang. I think I got terminology blinded by the time I got there.
I believe there would be a difference in their red/blue signatures if they were moving relative to each other, but as you say they clearly are on the same plane
Check out this video: https://rubinobservatory.org/gallery/collections/first-look-...
Incredible.
For anyone that hasn't clicked the link, it shows that in just a few days, the observatory has already found over 2000 new asteroids. That is indeed very impressive.
Imagine what it'll be turning up once the full survey is underway
Why do the brighter objects have the four-way cross artifact? My (apparently incorrect) understanding was that those types of artifacts were a result of support structures holding reflecting mirrors on a telescope. But this camera just has a "standard" glass lens with nothing obstructing the light path to the sensor.
Those diffraction spikes are caused by the four-vane spider structure supporting the secondary mirror in the telescope's optical path, not by the camera lenses themselves.
You are not wholly wrong! There is both a supporting structure for the mirror, AND a glass lens in front of the sensor to further flatten the incoming light.
The interesting thing about the spikes in our images is that they stay fixed in image plane coordinates, not sky coordinates. So as the night sky moves (earth rotates) the spikes rotate relative to the sky leading to a star burst pattern over multiple exposures.
It’s a reflecting telescope, not a camera with a glass lens.
Ah, thanks. I had seen a bunch of hype about the camera itself (which is on its own very impressive) and assumed that was the complete device. Didn't realize it was part of a larger telescope.
something green: https://skyviewer.app/embed?target=186.82033+8.25479&fov=0.0...
Image creator here. This is such a massive dataset that most of the image processing needed to be custom-written software pipelines. It's not really practical for every pixel to be hand inspected, so a few defects (and bright asteroids) made it through. It's really hard to decide what is a real weird thing in the universe and what is some sort of instrumental effect. We try not to pre-decide what we think we should be seeing and filter for it with things such as classifiers. That leaves us with heuristics based on temporal information, size (is it smaller than a point spread function), and other related things. Over large numbers of objects and pixels, 1-in-a-thousand or 1-in-a-million outliers are bound to occur.
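As a toy illustration of the "smaller than a point spread function" heuristic mentioned above (the width estimate and cutoff here are my own simplification, not the actual Rubin pipeline):

    import numpy as np

    def looks_like_artifact(cutout, psf_fwhm_pix, min_ratio=0.7):
        """Crude check: real astrophysical sources can never be sharper than
        the PSF, so a detection much narrower than the PSF FWHM is more
        likely a cosmic ray hit or a sensor defect."""
        cutout = cutout - np.median(cutout)           # rough background removal
        cutout = np.clip(cutout, 0, None)
        total = cutout.sum()
        if total <= 0:
            return True
        ys, xs = np.indices(cutout.shape)
        cy = (ys * cutout).sum() / total              # flux-weighted centroid
        cx = (xs * cutout).sum() / total
        var = (((ys - cy) ** 2 + (xs - cx) ** 2) * cutout).sum() / total
        fwhm = 2.355 * np.sqrt(var / 2.0)             # Gaussian-equivalent width
        return fwhm < min_ratio * psf_fwhm_pix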
I'm glad you responded (i'm assuming you knew i wasn't criticizing the effort, but just in case -- I wasn't). I was assuming asteroid trail, but I've read that green stars can't exist and _could_ be a technosignature of "little green men". :) Your work on this is lovely. The combined effort of so many smart people over decades of work is truly heartening. Thank you.
Could be a satellite that moved into the frame during green.
There was a livestream presentation and press conference up on YouTube
https://www.youtube.com/live/Zv22_Amsreo?si=zQLeGfJokZoCPkji
At time 1:38:19 - one hour 38 minutes 19 seconds - into the livestream presentation, there's a slide that shows RGB streaks of fast-moving objects that were removed for the final image.
Those streaks are apparently asteroids.
Perhaps it is indeed a glitch or cosmic ray event.
(Is there a better URL for the slide deck?)
something red https://skyviewer.app/embed?target=186.82033+8.25479&fov=0.0...
fixed link: https://skyviewer.app/embed?target=187.04483+7.00898&fov=0.2...
might be bad cosmic ray rejection during green exposure
https://skyviewer.app/
We got DOGE instead of using $100k of tax dollars making this into a super nice public mobile app...
While it would be cool to make a mobile app for this, having used both the tech stack behind this site (it's open source, and really great), and written a mobile app with a similar tech stack, $100k will get you one, but it's going to be a pain to debug all the various niggles around various devices, so you'll need at least double that to make a robust one (and then the question is, do you just go native and reimplement that tech stack).
I'm not typically this person, but Hacker News is just more and more "allow me to tell you". Buddy, I have 20 years of experience building mobile apps... including against scientific, open source, and plain old hack-AF platforms...
My God, it's full of stars
Image creator here. Now imagine: when the survey is done, we will be able to see even fainter objects and image an area of the sky 1,000 times this size.
Brings up that old paradox: shouldn't every line of sight ultimately end at a star?
https://en.wikipedia.org/wiki/Olbers%27s_paradox
Note that it makes a lot of assumptions beyond the stated ones, such as:
* the only objects in space are stars
* all stars are equally bright
* the average brightness is one that can be seen
(unless you roll all this into "homogeneous"?)
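For what it's worth, the core of the argument only needs homogeneity plus a static, infinite universe: a thin shell of stars at distance r contributes a flux that does not depend on r, so the total diverges. With uniform star density n and average luminosity L:

    dF = \underbrace{n \, 4\pi r^{2} \, dr}_{\text{stars in shell}} \times \frac{L}{4\pi r^{2}} = n L \, dr,
    \qquad F = \int_{0}^{\infty} n L \, dr \to \infty

Unequal brightnesses just replace L with the mean luminosity, and occlusion by nearer stars doesn't rescue you: it only means every line of sight ends on some stellar surface, which is the "sky as bright as a stellar surface" form of the paradox. The usual resolutions are the finite age of the observable universe and redshift.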
Yeah I'm confused because couldn't a black hole between us and a star be the reason for a black spot? That times a bajillion for whatever else is out there.
gravitational lensing would make the light go around the black hole.
PetaPixel has a decent article / video on the topic from a visit to the observatory:
* https://petapixel.com/2025/06/23/hands-on-at-the-vera-c-rubi...
Not super technical, but a little higher level (with decent analogies to photography, for their traditional audience).
I really like that they mentioned how the telescope will be sub-optimal for wedding photography
Every set of deep field imagery reminds me that any point of light we see could be a star, a galaxy, or a cluster of galaxies. The universe is unimaginably vast.
For observatories like Rubin, is there a plan for keeping them open after the funding ends? Is it feasible for Chile to take over the project and keep it going?
On a practical note, what happens to a facility like this if one day it's just locked up? Will it degrade without routine maintenance, or will it still be operational in the event someone can put together funding?
There are already facilities like this (obviously not as new as Rubin) degrading due to funding, but this is because there's usually no better purpose for them. Space monitoring has been used in the past as a second life for facilities (outreach too), but ~1m class telescopes are good enough now that networks of them are better than a 40+ year old telescope. It's also worth noting bits can be reused: buildings gutted and repurposed, telescopes/instruments moved/sold on, etc.; but the real issue is having the staff to look after these places, and many older facilities are not always as amenable to automation as people might like (especially funding agencies).
Arecibo was about 60 years old for comparison when it collapsed, but there are lots of facilities that are effectively ships of Theseus, with new instruments coming in over time which refresh the facility (and when that stops happening, then you get concerned).
I just skimmed the budget request, and it looks like NSF is planning on keeping Vera C. Rubin at least through 2026. Really good news!
It will continue with a new (spectroscopic) instrument after 10 years, funding permitting. Tololo has been running since the 60s. In California, Lick has been running since the 1880s.
I get that it will run for a long time as long as someone is maintaining it. I am wondering what will happen if the doors are locked and the power is cut for an extended period of time (5+ years), as seems like a very real possibility unless an alternate source of funding can be found.
That won’t happen for this telescope. It has so many unique capabilities that other telescopes would be shuttered first.
Even one zoom-in and I find something interesting.
What's that faint illuminated tendril extending from M61 (the large spiral galaxy at the bottom center of the image) upwards towards that red giant? It seems too straight and off-center to be an extension of the spiral arm.
EDIT: The supposed "Tidal tail" on M61 was evidently known from deep astrophotography, but only rarely detected & commented upon.
I was surprised by how many lensed objects I could spot.
Related: https://www.nytimes.com/2025/06/23/science/vera-rubin-scient...
(via https://news.ycombinator.com/item?id=44352455, but no comments there)
Petition to name those two mirrored galaxies "Wax on" and "wax off"?
I'll see myself out.
Took me a while but I got it
The zoomed images look grainy as one would expect from raw data, but I would have expected them to do dark field subtraction for the chips to minimize this effect. Does anyone know if that's done (or expressly avoided) in this context, or why it might not be as helpful (e.g., for longer exposures)?
Image creator here. We do dark field subtraction, as well as many other instrumental calibrations. What you are seeing is the fundamental photon noise. Because it is statistical in nature, you can never completely eliminate it. We could have chosen to put the black point in the image at a much higher flux level, but if you go to a high enough signal to noise level that you see no grain anywhere, you would miss out on so many interesting things that are still quite obvious to make out but are only 2-3 sigma above the noise.
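A quick numerical way to see why that grain is irreducible (the count levels are made up, not real Rubin numbers): photon arrivals are Poisson-distributed, so a pixel that collects on average N photons carries an unavoidable scatter of sqrt(N), and no calibration step can subtract it.

    import numpy as np

    rng = np.random.default_rng(1)

    sky_photons = 400.0                 # mean sky photons per pixel (illustrative)
    shot_noise = np.sqrt(sky_photons)   # Poisson scatter ~ 20 photons

    patch = rng.poisson(sky_photons, size=(32, 32)).astype(float)
    patch[16, 16] += 3 * shot_noise     # a source only ~3 sigma above the sky

    print("measured sky rms:", round(patch.std(), 1), "expected:", shot_noise)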
Seems this will be done on the 'nightly' release cadence. Found on page 11 in this doc that I found from the wikipedia page:
https://docushare.lsstcorp.org/docushare/dsweb/Get/LSE-163/L...
For a step by step tour: https://skyviewer.app/tours/cosmic-treasure-chest/
Related: When a Telescope Is a National-Security Risk [1];
TL;DR: VCRO is capable of imaging spy- and other classified US satellites. An automated filtering system (involves routing through some government processing facility) is in place to remove them from the freshly captured raw data used for the public transient phenomena alert service. 3 days later, unredacted data is made available (by then the elusive, variable-orbit assets are long gone.)
[1] https://www.theatlantic.com/science/archive/2024/12/vera-rub...
Why are there lens-flare-like artifacts around some of the bright objects?
Those are diffraction spikes, caused by how the light interacts with the support structure holding the secondary mirror. Each telescope has a different pattern: Hubble, JWST, etc. I think they only happen for stars, and not for galaxies (an easy way to know which is which), but I might be wrong on that (there's a possibility for faint stars not to have them IIRC).
> "Each telescope has different patterns"
This one's extra-special! The pattern is multiple + shapes, rotated and superimposed on top of each other. And they're different colors! That's this telescope's signature scanning pattern at work. I don't know the details, but it's evident it takes multiple exposures, in different color filters, with the image plane rotated differently relative to the CCD plane in each exposure. I assume there's some kind of signal processing rationale behind that choice.
edit: Here's one of the bright stars, I think it's HD 107428:
https://i.ibb.co/HTmP0rqn/diffraction.webp
This one has asteroid streaks surrounding it (it's a toggle in one of the hidden menus), which gives a strong clue about the timing of the multiple exposures. The asteroids are moving in a straight line at a constant speed, so the spacing and colors of the dots show what the exposure sequence was.
I think this quote explains the reason they want to rotate the camera:
> "The ranking criteria also ensure that the visits to each field are widely distributed in position angle on the sky and rotation angle of the camera in order to minimize systematic effects in galaxy shape determination."
https://arxiv.org/abs/0805.2366 ("LSST [Vera Rubin]: from Science Drivers to Reference Design and Anticipated Data Products")
> with the image plane rotated differently relative to the CCD plane in each exposure
LSST is an alt/az telescope. The earth rotates. The sensor plane must rotate during the exposure to prevent stars from streaking, which it accomplishes via this platform: https://docushare.lsstcorp.org/docushare/dsweb/Get/Document-...
The fact that the sensor rotates without the spider rotating also spreads out the diffraction spikes.
But that rotation is limited, so between different exposures with different filters the image plane will be rotated relative to the sky.
As the quote says, the change in orientation has benefits for controlling systematics.
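For the curious, the rate the camera rotator has to track is the field rotation rate of an alt/az mount. A commonly quoted approximation is sketched below (my own illustration, with invented pointing values at roughly Cerro Pachón's latitude); note that it diverges as you approach the zenith, which is why alt/az telescopes have a small zone near the zenith they cannot track through:

    import math

    SIDEREAL_RATE_DEG_PER_HR = 15.04

    def field_rotation_rate(latitude_deg, azimuth_deg, altitude_deg):
        """Approximate field rotation rate (deg/hr) for an alt/az telescope:
        15.04 * cos(lat) * cos(az) / cos(alt). Diverges toward the zenith."""
        lat, az, alt = (math.radians(v) for v in (latitude_deg, azimuth_deg, altitude_deg))
        return SIDEREAL_RATE_DEG_PER_HR * math.cos(lat) * math.cos(az) / math.cos(alt)

    # Illustrative pointing: azimuth 45 deg, altitude 60 deg, latitude -30 deg.
    print(round(field_rotation_rate(-30.0, 45.0, 60.0), 2), "deg/hr")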
No, they happen for absolutely every externally-generated pixel of light (that is, not for shot noise, or fireflies that happen to fly between the mirrors). Where objects subtend more than one pixel, each pixel will generate its own diffraction pattern, and the superposition of all of them is present in the final image. Of course, each diffraction pattern is offset from the next, so they mostly just broaden (smear out), not intensify.
However, the brightness of the diffraction effects is much lower than the light of the focused image itself. Where the image is itself dim, the diffraction effects might not add up to anything noticeable. Where the image supersaturates the detector (as can happen with a 1-pixel-wide star), the "much lower" fraction of that intensity can still be annoyingly visible.
It depends on the science you're doing, as even these small effects add up. There's a project within the LSST science team (which a colleague is working on) to reduce this scattered light (search for "low surface brightness"), where there's a whole lot of work around modelling and understanding what effect the telescope system has on the idealised single point that is a star.
There are projects (dragonfly and huntsman are the ones I know of) which avoid using mirrors and instead use lenses (which have their own issues) to reduce this scattered light.
The same effect is used for Bahtinov focusing masks. From what I know, all light will bend around the structures, but stars are bright and focused enough to see it; in theory galaxies would too.
Diffraction spikes [1] are a natural result of the wave-like nature of light, so they occur for all objects viewed through a telescope, and the exact pattern depends on the number and thickness of the vanes.
My favourite fact about these in relation to astronomy is that you can actually get rid of the diffraction spikes if your support vanes are curved, which ends up smearing out the diffraction pattern over a larger area [2]. However this is often not what you want in professional astronomy, because the smeared light can obscure faint objects you might want to see, like moons orbiting planets, planets orbiting stars, or lensed objects behind galaxies in deep space. So you often want sharp, crisp diffraction spikes so you can resolve these faint objects next to or behind the bright object that's up front.
[1] https://www.celestron.com/blogs/knowledgebase/what-is-a-diff...
[2] https://www.fpi-protostar.com/img/spikes.gif
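You can reproduce the spikes numerically: in the far-field (Fraunhofer) limit the point spread function is the squared magnitude of the Fourier transform of the aperture, so a disc blocked by a cross-shaped spider gives the familiar four spikes. The geometry below is invented, not the actual Rubin spider:

    import numpy as np

    N = 512
    y, x = np.indices((N, N)) - N // 2
    r2 = x**2 + y**2

    # Annular aperture: primary mirror with a central obstruction...
    aperture = ((r2 < (N // 4) ** 2) & (r2 > (N // 16) ** 2)).astype(float)
    # ...crossed by a thin four-vane spider.
    aperture[np.abs(x) < 2] = 0.0
    aperture[np.abs(y) < 2] = 0.0

    # Fraunhofer diffraction: the PSF is |FFT(aperture)|^2.
    psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
    log_psf = np.log10(psf / psf.max() + 1e-12)   # log scale makes the spikes obvious
    # Display log_psf with any plotting library to see the four spikes.

Curving the vanes correspondingly smears that power over many angles instead of concentrating it in two directions, which is the effect described above.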
Those are stars, they create those lens flares because they are so bright.
All the dim fuzzy objects are galaxies much further away.
What is this? https://skyviewer.app/embed?target=183.65537+6.09434&fov=0.0...
Looks like two galaxies interacting/merging.
The potential for discoveries here seems enormous
"Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space."
Amazing
Jesus H Christ, the Universe is big.
“Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.” -Douglas Adams
Sometimes I feel like a diatom floating in the ocean