Podcast: Play in new window | Download
Subscribe: RSS
Our senses can only detect a fraction of the phenomena happening in the Universe. That’s why scientists and engineers develop detectors, to let us see radiation and particles that we could never detect with our eyes and ears. This week we’ll go through them all, so you can understand how we see what we can’t see.
Shownotes
Pamela’s interview with The Naked Scientists
- The original detector: the human eyeball; The Eye and How We See — Vision Learning Center
- The human eye –– Wiki
- Rods and Cones
- Why amateur astronomers use red flashlights
- Glass photographic plates — Wiki
- Charge Coupled Device (CCD) — Wiki
- How Does the Hubble Space Telescope Work? HubbleSite
- Ultraviolet waves — NASA
- Ultraviolet astronomy — Wiki
- Infrared waves — NASA
- List of optical and infrared observatories
- Radio waves — NASA
- Radio telescopes — Wiki
- Microwaves — NASA
- Microwave astronomy — Western Australian Astronomy
- X-Rays — NASA
- X-Ray astronomy — NASA
- Gamma Rays — NASA
- Gamma Ray Astronomy — NASA
- Scintillation in astronomy — Wiki
- Cherenkov Radiation — World of Physics
- Neutrinos
Transcript: Detectors
Fraser Cain: Pamela is back from Europe.
Dr. Pamela Gay: Oh, it was a wonderful trip and next time you have to go with me Fraser.
Fraser: Yeah, no problem [Laughter]. So, where did you go? What did you do?
Pamela: I went to Munich and I walked all over the city because I discovered nothing is open on Sundays so I just walked. It was pretty, lots of cool stuff.
Fraser: Munich is a great city I really liked it.
Pamela: I visited the European Southern Observatory facility in Garching. I then visited the wonderful Dr. Chris Lintott at Oxford and we plotted to do things for the International Year of Astronomy.
Then I went to Cambridge for a joint meeting of the British Astronomical Association and the American Association of Variable Star Observers.
I also happened to meet Chris Smith, the Naked Scientist himself. Lots of great people, many of them named Chris.
Fraser: You were interviewed on his show, right?
Pamela: It should be there when you hear this. You should be able to click over to the Naked Scientist and hear me talking a bit about my research and one of the amateurs that takes data for me and points out really cool stars to me was interviewed as well.
Fraser: Awesome. Well we’ll try to find that link to it from the show notes.
This week, our senses can only detect a fraction of the phenomena happening in the Universe. That’s why scientists and engineers develop detectors, to let us see radiation and particles that we could never detect with our eyes and ears. This week we’ll go through them so you can understand how we see what we can’t see.
Where do you want to start Pamela?
Pamela: Why don’t we start with the most commonly used detector of them all, the human eyeball?
Fraser: Perfect. So let’s talk about human eyes, I have two.
Pamela: Okay, so the human eyeball is sensitive in a bunch of different ways. There is first of all, the day-to-day color vision which is the way we think most of the time.
We can see from about 400 nanometers which is blue, to 700 nanometers which is red.
Fraser: That’s the wavelength of the light.
Pamela: Yes. That’s the wavelength of the light.
Fraser: Four hundred nanometers from peak to peak, is that right?
Pamela: Yes, peak to peak; that’s the separation between successive peaks of the wave.
Fraser: Right.
Pamela: And to get color vision, we actually have three different cone cells in the eye that are sensitive to different sets of wavelengths. Where one sees blue another sees green and another sees red. Our brain is able to sort out these three different sets of input and detect a whole lot of red photons and a few blue photons and a few green photons and translate that into a color.
The catch is, since each particular type of cell is only sensitive to one of these three bands of color, we can only trigger our eyes on certain colors. In really, really low light conditions there may not be enough photons of any one color to trigger our eyes to see something.
To make up for that, because human beings don’t want to be eaten by things in the woods after dark, our eyes, particularly in the peripheral vision have these things called rods. Rods are extremely insensitive to red.
With other colors, blues, greens, yellows and things like that they simply recognize light, no light. In this way we’re able to see very faint objects at night that aren’t red.
This is somewhat weird if you look at rose bushes: during daylight the red flowers stand out, but at night all you see is the green foliage while the red flowers go dark.
Fraser: Right. It’s almost like your vision turns black and white at night. You can see enough to not bump into things but you can’t make out really sensitive color differences.
Pamela: Also, one of the weird things because these are all chemical reactions, is your eyes actually take time to adjust to the darkness. It takes time for them to fully dilate and it takes time for extra chemicals to build up in your eyes such that if you do get a burst of light, they trigger.
Because of this at night, you only want to use red lights which the color sensitive parts of your eyes will trigger on but the black and white sensitive parts of your eyes will utterly ignore.
Fraser: I have a flashlight that I can put a red filter over the front of it that when I’m outside looking at star charts, it doesn’t ruin my night vision.
Pamela: This is the same reason that brake lights on the backs of cars are red. It’s the same reason that under war conditions ships go to red alert and everything goes to red, and you can dark adapt so if something bad happens and all the lights go out, you can still see.
Fraser: I didn’t know that. That’s where red alert comes from.
Pamela: Right, that’s where red alert comes from. It’s a way of working to protect your night vision; protect your low light vision. Once you get that low light vision all set up and you’re ready to go, the human eye has an amazing dynamic range.
We can go from basically being able to trigger on as few as five to ten photons that arrive within about a hundred milliseconds of one another, to being able to see something 250 times brighter than that before it starts to do bad things to our eyeballs.
We have a factor of 250 between looking at a nice bright star in the sky, say Sirius, and something 250 times fainter than that. That’s kind of cool to think about.
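For context, astronomers usually express brightness ratios like that factor of 250 as magnitude differences, via 2.5 × log10(ratio). A small sketch:

```python
import math

def brightness_ratio_to_magnitudes(ratio):
    """Astronomical magnitude difference for a given brightness ratio."""
    return 2.5 * math.log10(ratio)

# The eye's roughly 250x dynamic range quoted above is about 6 magnitudes.
dm = brightness_ratio_to_magnitudes(250)
```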
Fraser: Where does our eye fall down?
Pamela: Well, we are limited in the colors that we can see. Once you start using cameras and photoelectric detectors, you can get more than a 250 factor between bright and faint objects.
While we have a pretty good dynamic range and while we can see from blue to red quite happily, it would be kind of nice if we could see in radio light or infrared light. Maybe not day to day, but scientifically it would be nice to be able to see in these other colors.
X-ray is just another color, but not one the human eye can see. To start to see these other things and to start to be able to see fainter things or record what we see, we need to start moving to detectors.
Fraser: Right, the recording is the problem. You may get a handful of photons falling on your eyes but it’s just a snapshot in time. Your brain says, well, not enough, I’m not going to see anything.
But, if you could look at something that over hours and weeks the photons keep falling, you could eventually see something important there. But we just can’t record anything so it’s just all thrown away.
Pamela: Not only that, but the human eye is not a perfect recorder. If you’re out on a really bright day and you look out into a blue sky, you may see these weird things swimming through your vision called floaters.
You may, when you are at the eye doctor and he shines the bright light in your eye see actually the back of your eye get reflected around such that you can see it yourself. All these different things can crop up in our astronomical sketches and we might think they’re actually part of what we’re looking at.
The canals on Mars weren’t actually there, they were stretching the human vision beyond what it was designed to do. So we have to be aware of our biological defects.
Fraser: The astronomical detector equivalent of the eye is the telescope?
Pamela: The telescope can be used with the human eye. The astronomical detector that is perhaps the modern day equivalent is the CCD array. It used to be cameras.
Fraser: Right, or film, okay.
Pamela: We went from originally using glass plates. Glass plates were a bit evil, because first of all you had to develop them and anyone who has ever developed film knows some days are better than others.
You also had to pre-illuminate them and all sorts of scary things, and once you’d done all of that, they weren’t necessarily linear detectors. There might be a factor of two in brightness between two objects, and a factor of three between one of those and a third one, but the plate doesn’t record those factors faithfully. You see one as being two and a half times brighter.
This nonlinearity (the recorded values aren’t evenly spaced, like marks on a ruler) can make it hard to analyze what you’re taking pictures of.
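A toy illustration of that nonlinearity problem, assuming a hypothetical power-law plate response (the exponent here is made up for illustration, not a measured value):

```python
# Toy model: a detector whose recorded signal is a power law of the true
# flux mis-reports brightness ratios between objects.
def plate_response(flux, gamma=0.8):
    """Hypothetical nonlinear response; gamma < 1 compresses bright sources."""
    return flux ** gamma

a, b = 1.0, 2.0                          # true fluxes differ by a factor of 2
ra, rb = plate_response(a), plate_response(b)
distorted_ratio = rb / ra                # ~1.74, not 2: the ratio is distorted
```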
Fraser: Right, it’s super hard. I don’t spend time in a darkroom, but it’s really hard to get the contrast the same from picture to picture when you’re just working to develop it.
Pamela: And the chemicals don’t trigger linearly across the faintest objects and the brightest objects. It’s not the most sensitive thing in the world either. In an ideal situation, for every one photon that hits whatever your detector is you’ll be able to make some sort of a measurement.
You can’t do that with film. So nowadays, something that might have taken eight hours with a glass plate you can do in 20 minutes with a smaller telescope. You can do in five minutes with maybe not quite so small a telescope.
Now that we have electronic ways of measuring things it’s a lot more efficient and also more linear is what we’re finding.
Fraser: What is the state of the art right now? It’s the CCD, right?
Pamela: For doing photos the state of the art is the CCD. In some ways you take your television and you can look and see how many pixels are on it and the number of pixels on your television tells you what the resolution is.
We have pixels on CCDs as well except here instead of giving off light like your television does, they take in the light. Each little pixel detects what it can of the sky and in the good ones, one photon hits them and there is this thing called the photoelectric effect that Einstein came up with.
It says that if you have just the right atom and just the right color of light, the light comes in, hits the atom, and an electron goes away. Well, an electron going away is the same thing as current flowing.
In an ideal CCD, light comes in, hits the surface of the chip and where it hits triggers an electron to fall out of an atom. The electrons get captured inside the pixel; in these wells as we call them. If the well gets too full it overflows and you end up with a streak across your image.
But if you get your integration time just right you can end up filling each of these wells to a slightly different height. The height that you fill them up with electrons says well this particular well detected a thousand photons.
This one detected 10,000 photons. So you’re able to get a density map of the number of photons hitting your detector which is a brightness map of the sky.
Fraser: Can the detector sense different energies in the photons? Can it distinguish one is a red photon and another is a green photon, or do you have to have a different detector for each color you are trying to detect?
Pamela: The way we normally do it with high resolution work is the detector medium itself has a different, what we call quantum efficiency, as a function of color.
In the red, it might detect 95 percent of the photons. In the blue it might detect 70 percent of the photons that are hitting it. You can’t tell in a picture which one was hit with only blue or this one was hit with only red.
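Those quantum efficiency numbers suggest a simple correction: divide the detected counts by the QE for that band to estimate how many photons actually arrived. A sketch using the illustrative 95 percent and 70 percent figures from the discussion:

```python
# Correct raw photon counts for quantum efficiency (QE) per band.
# QE values are the illustrative ones quoted above, not real chip specs.
QE = {"red": 0.95, "blue": 0.70}

def true_count(detected, band):
    """Estimate the photons that actually arrived, given detected counts."""
    return detected / QE[band]

red_arrived = true_count(950, "red")    # ~1000 photons actually arrived
blue_arrived = true_count(700, "blue")  # ~1000 photons actually arrived
```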
What we do instead is to take a series of images of the exact same object and use filters. A glass filter or some complex media filter in some cases we actually use cells of gas, we put this filter in front of our detector chip.
Light goes through the filter and only the color we want makes it all the way through. Everything else gets scattered off. Let’s say then only the red light hits the detector.
Fraser: Okay, I get it. Instead of trying to have the detector figure out what kinds of light is hitting it, you just block all the light that you don’t want to come through further ahead.
In the end the detector can safely assume that whatever is falling on it is the color light that it’s trying to receive.
Pamela: Yes, and this leads to some really interesting pictures of things like comets that are moving relative to the stars. If you want a color image of the comet, you first take one through a red filter, then through a blue filter, then a green filter, and the stars will be in different places in each of these three filters.
So you get a pretty color comet and then you get these triplets of stars where one star is green, and it’s really the same star, just through three different filters.
Fraser: Right and then they can merge those together. A lot of the pictures that are taken by the orbiters like of Mars, they will take a picture of one color and then a second and third color and that will match the red, green and blue.
That will then be color corrected on computer to make a natural like version of what you might see if you were floating above Mars.
Pamela: You can buy color detectors, but you don’t really want to for a lot of high resolution stuff because each of the different colors takes up physical space. If you have a red, a blue and a green sensor, you’re actually getting one third the resolution on the sky.
It’s better to pack the pixels in as close as possible, with each pixel simply recording light or no light, rather than devoting separate pixels to red light, green light and blue light.
So you get increased resolution but it takes you a little bit longer on the sky to get all three colors. That’s okay if you get the higher resolution.
Fraser: I know the more expensive video cameras that you can buy will have three CCDs in them. It’s the same process, right?
A single CCD has a lot less resolution and a lot less ability to see all the different colors well. The three CCD cameras are splitting it up and capturing each color into an individual CCD and you get a much better clarity of an image.
Pamela: Yeah, and there unfortunately you have to split the light so when you start dealing with really faint stars, you don’t want to be splitting the light up. It’s all a matter of what you’re trying to do.
How bright is the object you’re looking at and what resolution do you want to use. Different optical engineers have come up with dozens of ways of building these to solve all sorts of different problems.
Fraser: How does the CCD let us record things over time?
Pamela: The basic idea is photon comes in, hits atom, knocks electron out of atom. The electrons get piled up in what we call wells. Then at the end of the exposure we very carefully, and some of the really high sensitivity, low noise ones of these do this very slowly, read out what’s in each pixel one row at a time.
You can think of this as perhaps a football field of people lined up in rows with buckets. The buckets closest to the end zone are all empty. Everyone shifts the contents of their bucket one row. Then you read out what’s in that row. Each time you measure one bucket at a time.
You move everyone one direction and then that last row you read it all out one bucket at a time from left to right. Then you rinse and repeat. Dump everything one bucket over and then read it all out across the row.
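The bucket-brigade readout described above can be sketched in a few lines. This is a simplified model, not real CCD electronics: each row of charge wells shifts toward the serial register, and the last row is read out pixel by pixel.

```python
# Sketch of CCD readout as a "bucket brigade": shift every row toward the
# readout register, then read that row out one pixel at a time.
def read_out(ccd):
    """ccd is a list of rows; the last row is nearest the readout register."""
    measured = []
    frame = [row[:] for row in ccd]      # work on a copy of the charge map
    while frame:
        serial_register = frame.pop()    # shift all rows one step down
        for charge in serial_register:   # read the register pixel by pixel
            measured.append(charge)
    return measured

image = [[3, 1],
         [4, 2]]
counts = read_out(image)  # row nearest the register comes out first
```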
Fraser: The CCD saves up the image for the entire duration and at the end…
Pamela: dumps the contents. It’s very complicated electronics, using precisely etched gates to move the charge carefully across the surface while introducing as little noise as possible.
Really sensitive CCDs we cool down as much as possible because just the thermal noise of having a warm chip will cause the electrons to bump from one well to the other. They are just so full of energy, like boiling water in buckets, that it splashes out.
If your buckets are close enough, that splash can splash into a different well. So, we keep things cool, move things out slowly and carefully and count every bit that we can.
Fraser: How does this affect some of the other wavelengths? We’ve talked about this in terms of visible light; now let’s go sort of on either end of the spectrum. It’s the same technique used for infrared and ultraviolet, right?
Pamela: Right, so in these cases with the photoelectric effect, different atoms are affected by different colors of light. There is a process called doping, where you seed the silicon on the chip with different atoms depending on what colors you want to be sensitive to.
We just create slightly different versions of CCDs to allow us to see in the UV and to allow us to see in the infrared. But in all three cases, we’re using the same basic technology.
It’s when you start getting beyond these more familiar colors that you start getting into some rather strange technologies.
Fraser: What we’ve talked about, the CCD with a set of filters in front of it, with different chemicals on the surface of the CCD, will let you see from the infrared through visible light, through ultraviolet. You can have the same detector able to see all those different colors.
That’s what Hubble does, right? Hubble has one CCD that it can put different filters in front and see the different colors. I know it has a bunch of different instruments that are connected to the telescope.
Pamela: They all use similar technologies at this level.
Fraser: Right.
Pamela: They use a bunch of different filters, a bunch of different similar types of detectors.
Fraser: Let’s go to radio then.
Pamela: Radio works a lot like your satellite dish in your back yard.
Fraser: But I don’t know how my satellite dish works. [Laughter] So that’s not helping.
Pamela: Okay.
Fraser: It’s a dish and it’s kind of looks like a telescope, right? The satellite dish is focusing the radio to a detector. But what is the detector? How does that work? How does it focus? [Laughter] Take those in any order you like.
Pamela: One of the things that always baffles people with radio as compared to CCDs is with CCDs you’re getting the whole swath of the sky all at once and you’re going, “Ooh, pretty galaxy.”
But with the radio detector, you have what’s called a beam of the sky that you’re able to see. So, you point your detector at something and it basically gathers all the photons from that something and focuses them to a single point that then goes light or no light.
You can only see one pixel at a time so you have to actually scan your detector across the sky looking for radio signals. The detector itself is basically identical to an FM radio. It’s just a lot more sensitive.
Fraser: Now, that’s the same process for radio waves from the longest wavelengths. Is it the same for microwaves as well?
Pamela: Microwaves work the exact same way as well. The only difference is with microwaves you have to make sure that the surface of your detector is super smooth. Like if you dropped a single hair onto a microwave receiver, it would probably be the biggest bump on the receiver.
Whereas once you start getting out to the really long wavelengths, you could basically take the detector and put a dent in it and it would still work just fine. As you get to smaller and smaller wavelengths you have to have a much better configured surface to your reflecting dish. With the longer wavelength, it’s a lot more forgiving.
Fraser: Let’s flip back over and go higher than ultraviolet.
Pamela: As you go to the shorter and shorter wavelengths, the higher and higher frequencies, it gets progressively harder to focus your light. With normal light that we’re used to, visible light, you just pass it through a lens and you just reflect it off a mirror. It’s perfectly happy to go where you want it to.
With x-ray light, the light would rather go through what you’re pointing it at. What you have to do is rely on what are called grazing incidence angles, where the light, at an angle of just a couple of degrees, grazes off the edge of a reflecting surface.
You try to funnel as much light as you can from one section of the sky onto your detector, while at the same time blocking all the light from the left, right and all around your detector so that you’re not getting hit from the sides.
It is a very complicated system that combines shielding and basically a funnel to get your light to the detector.
It’s again very similar to a CCD except here in order to get everything to work you don’t quite know where the photons are coming from because instead of focusing on them, you are bouncing them in. So what they do is they make what are called shadows.
They take a screen and put what looks to the human eye a random pattern of blocked spaces on it. Light that comes in from one set of angles will cast one type of shadow on the detector while light that comes in from a different set of angles will cast a different type of shadow as it passes through the screen.
With a lot of complex math, they are able to build images of the sky in x-ray by taking apart all of these different shadows. Think of it as an actress on stage illuminated with six different stage lights and she ends up with a star pattern of shadows about her feet.
You can figure out from the six different shadows where the lights have to be. By looking at the shadows from the screen, we can figure out where the x-rays have to be.
Fraser: And that is the kind of detector that is sitting inside the Chandra x-ray observatory or the XMM Newton, right?
Pamela: That’s exactly what we’re using, these grazing incidence angle focusing systems. It’s really amazing what they’ve figured out how to do.
Fraser: Because the photons are so high energy, it’s just harder and harder to control them and focus them.
Pamela: And harder to prevent them from doing things you don’t want them to do. Gamma rays make it even harder. Gamma rays really just want to go through you. They really just don’t care, they want to go straight through the detector and keep going.
The way we end up detecting gamma rays in a lot of cases is through what you call scintillation. You take some sort of material that you’re fairly certain is going to stop a gamma ray, and hopefully some sort of material that when it stops the gamma ray is also going to give off a flicker of light that is easy to detect.
Different scintillation materials are used and these different materials, including our atmosphere in some cases, whenever they are hit with gamma rays give off normal light and we detect the normal light using all the other types of technologies that I’ve talked about so far.
It sounds convoluted and it is. You’re basically taking gamma ray light and transforming it into something else to make it detectable. It also means that we aren’t always sure where the gamma rays are coming from.
What you end up doing to try and figure out where a gamma ray source came from is you point your detector (like a bucket) at a large section of the sky.
Then block everything coming in from the sides as best you can, and then hope you gather some gamma rays inside your bucket.
When they trigger the scintillation material and you detect a flicker, you instantly look at that part of the sky in x-rays and other types of light and hope that whatever was giving off the gamma rays is also giving off light in other colors.
Fraser: That lets you verify that’s where it was coming from.
Pamela: Right. You might get this giant five degree or more swath of the sky, something ten times bigger than the moon on the sky and know that somewhere in that large area there was a gamma ray burst.
Then you go and look instead with the x-rays which we can focus a bit better and once you narrow down where it was with the x-rays, then you go look with the visible light. Then you know bang, I know exactly where it is located on the sky.
Fraser: Is there a way once you’ve detected it you can continually watch it with the gamma ray?
Pamela: With the gamma ray bursts, they last at most a hundred seconds or so. There have been a few exceptions, but on average they’re only a few seconds long. So you could continue looking there, but you won’t gain a lot more information.
We do use what’s called spectroscopy to say this gamma ray photon was more energetic than this other one. We try to get some extra information on what specific colors of gamma ray are coming out; what specific colors of x-ray are coming out, but we have no positional information from those.
Fraser: Now we have a couple of detectors which are outside of the spectrum. One is pronounced Cherenkov radiation. Is that right?
Pamela: Yes.
Fraser: That’s like another way to look at gamma rays.
Pamela: Cherenkov radiation actually comes from cosmic rays. You get this high energy proton that’s coming from somewhere in the universe, we don’t always know where. It could be coming from a black hole that’s feeding on something. It could be coming from an exploding star.
Something got this proton going really fast. As it hits the Earth’s atmosphere, it’s going faster than light can go in the atmosphere. That sounds kind of warped because really nothing is supposed to be able to travel faster than the speed of light.
But it’s the speed of light in a given medium. Light has this maximum speed that it can attain and that’s given in a vacuum. In all sorts of different things light travels at different speeds. In fact in some medium, you can actually get light going slower than you can walk.
You can imagine this proton chugging its way across the universe hits the atmosphere and in the atmosphere light doesn’t go as fast as it does in a vacuum.
The proton is now going faster than the light and this leads to really cool blue glow basically as the proton passes through the medium and it gives off all sorts of little radiation bits.
We detect that with different detectors by just looking for the right color of light to be given off. It’s basically a proton braking; the energy has to go somewhere, and the somewhere it goes is into light.
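The Cherenkov condition described here is just v > c / n, where n is the refractive index of the medium. A quick sketch (the refractive index values are typical textbook figures, used for illustration):

```python
# Cherenkov condition: a charged particle radiates when its speed exceeds
# c / n, the speed of light in the medium with refractive index n.
C = 2.998e8  # vacuum speed of light, m/s

def cherenkov_threshold(n):
    """Minimum particle speed (m/s) for Cherenkov emission in a medium."""
    return C / n

air_threshold = cherenkov_threshold(1.0003)  # just barely below c
water_threshold = cherenkov_threshold(1.33)  # roughly 75% of c
```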
Fraser: I guess the last one we want to talk about is detecting the undetectable which is neutrinos.
Pamela: They’re not totally undetectable.
Fraser: Yeah but can’t they move through a light year’s worth of solid lead and not bump into anything? [Laughter] That seems pretty undetectable to me.
Pamela: Yeah, it’s all a matter of statistics. Given any one single neutrino, it will just keep going. But if you send enough neutrinos through basically a giant tank of cleaning solution, because neutrinos will, given the opportunity, sometimes very rarely, react with chlorine. When you send them into this material, the neutrinos will occasionally interact and give off a bit of light.
We look for that light. Based on every one detection we get, we know there must have been a whole bunch of other neutrinos that we didn’t detect. It’s all a matter of statistics. Since we know roughly how likely a neutrino is to interact and we know roughly how big the atoms in the solution are, we can figure out cross-sections.
One way to think of it is if you have maybe 30 human beings scattered around a big auditorium and you start throwing tennis balls. Yes, the room is mostly empty, but eventually one of those tennis balls is going to hit a person and you can figure out by knowing how big a person is and how big a tennis ball is, what the probability of the tennis ball hitting the human is and how many tennis balls probably flew without hitting the person.
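The tennis-ball analogy is really a hit-probability calculation: if each throw hits with some small probability p (the total "cross-section" of the people divided by the area of the auditorium), the chance of at least one hit in N throws is 1 - (1 - p)^N. A sketch with made-up numbers:

```python
# Tennis-ball analogy: probability that at least one of N thrown balls
# hits a person, given each throw hits independently with probability p.
def p_at_least_one_hit(p_single, n_throws):
    return 1.0 - (1.0 - p_single) ** n_throws

# 30 people each presenting ~0.5 m^2 in a 1500 m^2 auditorium (made-up numbers)
p_single = 30 * 0.5 / 1500.0                 # 0.01 per throw
p_hundred = p_at_least_one_hit(p_single, 100)  # roughly 63% after 100 throws
```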
Fraser: I think that covers all of the methods of detection. I think when you hear about how astronomers are building up their pictures and doing their research, hopefully that will help give you a little more insight. Thanks a lot Pamela.
Pamela: It’s been my pleasure.
This transcript is not an exact match to the audio file. It has been edited for clarity. Transcription and editing by Cindy Leonard.
Dr. Pamela Gay: Oh, it was a wonderful trip and next time you have to go with me Fraser.
Fraser: Yeah, no problem [Laughter]. So, where did you go? What did you do?
Pamela: I went to Munich and I walked all over the city because I discovered nothing is open on Sundays so I just walked. It was pretty, lots of cool stuff.
Fraser: Munich is a great city I really liked it.
Pamela: I visited the European Space Agency/the European Southern Observatory facility unit or Garsching. I then visited the wonderful Dr. Chris Lintott at Oxford and we plotted to do things for the International Year of Astronomy.
Then I went to Cambridge for a joint meeting of the British Astronomical Association and the American Association of Variable Star Observers.
I also happened to meet Chris Smith, the Naked Scientist himself. Lots of great people, many of them named Chris.
Fraser: You were interviewed on his show, right?
Pamela: It should be there when you hear this. You should be able to click over to the Naked Scientist and hear me talking a bit about my research and one of the amateurs that takes data for me and points out really cool stars to me was interviewed as well.
Fraser: Awesome. Well we’ll try to find that link to it from the show notes.
This week, our senses can only detect a fraction of the phenomena happening in the Universe. That’s why Scientists and Engineers developed detectors to let us see radiation and particles that we could never detect with our eyes and ears. This week we’ll go through them so you can understand how we what we can’t see.
Where do you want to start Pamela?
Pamela: Why don’t we start with the most commonly used detector of them all, the human eyeball?
Fraser: Perfect. So let’s talk about human eyes, I have two.
Pamela: Okay, so the human eyeball is sensitive in a bunch of different ways. There is first of all, the day-to-day color vision which is the way we think most of the time.
We can see from about 400 nanometers which is blue, to 700 nanometers which is red.
Fraser: That’s the wavelength of the light.
Pamela: Yes. That’s the wavelength of the light.
Fraser: Four hundred up and downs per nanometer, is that right?
Pamela: Yes, peak to peak that’s the separation between the peaks in the wavelength.
Fraser: Right.
Pamela: And to get color vision, we actually have three different cone cells in the eye that are sensitive to different sets of wavelengths. Where one sees blue another sees green and another sees red. Our brain is able to sort out these three different sets of input and detect a whole lot of red photons and a few blue photons and a few green photons and translate that into a color.
The catch is, since each particular type of cell is only sensitive to one of these three bands of color, we can only trigger our eyes on certain colors. In really, really low light conditions there may not be enough photons of any one color to trigger our eyes to see something.
To make up for that, because human beings don’t want to be eaten by things in the woods after dark, our eyes, particularly in the peripheral vision have these things called rods. Rods are extremely insensitive to red.
With other colors, blues, greens, yellows and things like that they simply recognize light, no light. In this way we’re able to see very faint objects at night that aren’t red.
This is somewhat weird if you look at rose bushes during daylight where the red flower stands out. Then at night where all you see is the green light.
Fraser: Right. It’s almost like your vision turns black and white at night. You can see enough to not bump into things but you can’t make out really sensitive color differences.
Pamela: Also, one of the weird things because these are all chemical reactions, is your eyes actually take time to adjust to the darkness. It takes time for them to fully dilate and it takes time for extra chemicals to build up in your eyes such that if you do get a burst of light, they trigger.
Because of this at night, you only want to use red lights which the color sensitive parts of your eyes will trigger on but the black and white sensitive parts of your eyes will utterly ignore.
Fraser: I have a flashlight that I can put a red filter over the front of it that when I’m outside looking at star charts, it doesn’t ruin my night vision.
Pamela: This is the same reason that brake lights on the backs of cars are red. It’s the same reason that under war conditions ships go to red alert and everything goes to red, and you can dark adapt so if something bad happens and all the lights go out, you can still see.
Fraser: I didn’t know that. That’s where red alert comes from.
Pamela: Right, that’s where red alert comes from. It’s a way of working to protect your night vision; protect your low light vision. Once you get that low light vision all set up and you’re ready to go, the human eye has an amazing dynamic range.
We can go from basically being able to trigger on as few as five to ten photons that arrive within about a hundred milliseconds of one another, to being able to see something 250 times brighter than that before it starts to do bad things to our eyeballs.
We have a factor of 250 between looking at a nice bright star in the sky, say Sirius, and something 250 times fainter than that. That’s kind of cool to think about.
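That factor of 250 maps neatly onto the astronomical magnitude scale, where each magnitude step is a brightness factor of about 2.512. A quick sketch (the function name is mine, not from the show):

```python
import math

def magnitude_range(brightness_ratio):
    """Convert a brightness ratio into astronomical magnitudes.
    Each magnitude is a factor of ~2.512, so mag = 2.5 * log10(ratio)."""
    return 2.5 * math.log10(brightness_ratio)

# The eye's ~250x usable range works out to about 6 magnitudes:
print(magnitude_range(250))  # ~6.0
```

Not coincidentally, about 6 magnitudes is also roughly the span from the brightest to the faintest stars visible to the naked eye.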
Fraser: Where does our eye fall down?
Pamela: Well, we are limited in the colors that we can see. Once you start using cameras and photoelectric detectors, you can get a factor of more than 250 between bright and faint objects.
While we have a pretty good dynamic range and while we can see from blue to red quite happily, it would be kind of nice if we could see in radio light or infrared light. Maybe not day to day, but scientifically it would be nice to be able to see in these other colors.
X-ray is just another color, but not one the human eye can see. To start to see these other things and to start to be able to see fainter things or record what we see, we need to start moving to detectors.
Fraser: Right, the recording is the problem. You may get a handful of photons falling on your eyes but it’s just a snapshot in time. Your brain says, well, not enough, I’m not going to see anything.
But, if you could look at something that over hours and weeks the photons keep falling, you could eventually see something important there. But we just can’t record anything so it’s just all thrown away.
Pamela: Not only that, but the human eye is not a perfect recorder. If you’re out on a really bright day and you look out into a blue sky, you may see these weird things swimming through your vision called floaters.
You may, when you are at the eye doctor and he shines the bright light in your eye see actually the back of your eye get reflected around such that you can see it yourself. All these different things can crop up in our astronomical sketches and we might think they’re actually part of what we’re looking at.
The canals on Mars weren’t actually there, they were stretching the human vision beyond what it was designed to do. So we have to be aware of our biological defects.
Fraser: The astronomical detector equivalent of the eye is the telescope?
Pamela: The telescope can be used with the human eye. The astronomical detector that is perhaps the modern day equivalent is the CCD array. It used to be cameras.
Fraser: Right, or film, okay.
Pamela: We went from originally using glass plates. Glass plates were a bit evil, because first of all you had to develop them and anyone who has ever developed film knows some days are better than others.
You also had to pre-illuminate them and all sorts of scary things, and once you’d done all of that, they weren’t necessarily linear detectors. So, there might be a factor of two in brightness between two objects, and a factor of three between one of those and a third one, but you don’t see these factors when you look at the plate. You see one as being two and a half times brighter.
This nonlinearity, where the numbers don’t end up equally separated like marks on a ruler, can make it hard to analyze what you’re taking pictures of.
Fraser: Right, it’s super hard. I don’t spend time in a darkroom, but it’s really hard to get the contrast the same from picture to picture when you’re just working to develop it.
Pamela: And the chemicals don’t trigger linearly across the faintest objects and the brightest objects. It’s not the most sensitive thing in the world either. In an ideal situation, for every one photon that hits whatever your detector is you’ll be able to make some sort of a measurement.
You can’t do that with film. So nowadays, something that might have taken eight hours with a glass plate you can do in 20 minutes with a smaller telescope. You can do in five minutes with maybe not quite so small a telescope.
Now that we have electronic ways of measuring things it’s a lot more efficient and also more linear is what we’re finding.
Fraser: What is the state of the art right now? It’s the CCD, right?
Pamela: For doing photos the state of the art is the CCD. In some ways you take your television and you can look and see how many pixels are on it and the number of pixels on your television tells you what the resolution is.
We have pixels on CCDs as well except here instead of giving off light like your television does, they take in the light. Each little pixel detects what it can of the sky and in the good ones, one photon hits them and there is this thing called the photoelectric effect that Einstein came up with.
It says that if you have just the right atom and just the right color of light, the light comes in, hits the atom and an electron goes away. Well, an electron going away is the same thing as current flowing.
In an ideal CCD, light comes in, hits the surface of the chip and where it hits triggers an electron to fall out of an atom. The electrons get captured inside the pixel; in these wells as we call them. If the well gets too full it overflows and you end up with a streak across your image.
But if you get your integration time just right you can end up filling each of these wells to a slightly different height. The height that you fill them up with electrons says well this particular well detected a thousand photons.
This one detected 10,000 photons. So you’re able to get a density map of the number of photons hitting your detector which is a brightness map of the sky.
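The well-filling idea can be sketched in a few lines. The full-well capacity and quantum efficiency below are hypothetical round numbers, and real CCDs “bloom” (streak) when a well overflows rather than simply clipping as this toy model does:

```python
FULL_WELL = 100_000  # hypothetical full-well capacity, in electrons

def expose(photon_rates, seconds, quantum_efficiency=0.9):
    """Fill each pixel's well with electrons during an exposure.
    photon_rates: photons per second arriving at each pixel.
    Wells that would overflow are clipped at FULL_WELL (saturation)."""
    wells = []
    for rate in photon_rates:
        electrons = rate * seconds * quantum_efficiency
        wells.append(min(int(electrons), FULL_WELL))
    return wells

# A faint pixel, a bright star, and one that saturates in a 30 s exposure:
print(expose([10, 1_000, 50_000], seconds=30))  # [270, 27000, 100000]
```

The returned list is exactly the “density map of the number of photons” Pamela describes: relative well heights are relative sky brightness, until a well saturates.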
Fraser: Can the detector sense different energies in the photons? Can it distinguish one is a red photon and another is a green photon, or do you have to have a different detector for each color you are trying to detect?
Pamela: The way we normally do it with high resolution work is the detector medium itself has a different, what we call quantum efficiency, as a function of color.
In the red, it might detect 95 percent of the photons. In the blue it might detect 70 percent of the photons that are hitting it. You can’t tell in a picture which one was hit with only blue or this one was hit with only red.
What we do instead is take a series of images of the exact same object through filters. It might be a glass filter or some more complex medium, in some cases we actually use cells of gas, and we put this filter in front of our detector chip.
Light goes through the filter and only the color we want makes it all the way through. Everything else gets scattered off. Let’s say then only the red light hits the detector.
Fraser: Okay, I get it. Instead of trying to have the detector figure out what kind of light is hitting it, you just block all the light you don’t want before it gets there.
In the end the detector can safely assume that whatever is falling on it is the color light that it’s trying to receive.
Pamela: Yes, and this leads to some really interesting pictures of things like comets that are moving relative to the stars. If you want a color image of the comet, you first take one through a red filter, then through a blue filter, then a green filter, and the stars will be in different places in each of these three filters.
So you get a pretty color comet and then you get these triplets of stars where one star is green, and it’s really the same star, just through three different filters.
Fraser: Right and then they can merge those together. A lot of the pictures that are taken by the orbiters like of Mars, they will take a picture of one color and then a second and third color and that will match the red, green and blue.
That will then be color corrected on computer to make a natural like version of what you might see if you were floating above Mars.
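The merging step Fraser describes can be sketched directly: three single-filter exposures become one three-channel image. The frames and counts below are toy data, and real pipelines also align and calibrate the frames first:

```python
def combine_filters(red_frame, green_frame, blue_frame):
    """Stack three single-filter exposures into one RGB image.
    Each frame is a 2-D list of photon counts; each output pixel
    is an (r, g, b) tuple built from the same position in each frame."""
    return [
        [(r, g, b) for r, g, b in zip(r_row, g_row, b_row)]
        for r_row, g_row, b_row in zip(red_frame, green_frame, blue_frame)
    ]

# Toy 2x2 frames: counts through the red, green, and blue filters.
red   = [[9, 0], [0, 0]]
green = [[9, 0], [0, 5]]
blue  = [[9, 7], [0, 0]]

rgb = combine_filters(red, green, blue)
print(rgb[0][0])  # (9, 9, 9): bright in all three filters, a white pixel
print(rgb[0][1])  # (0, 0, 7): only the blue frame saw it, a blue pixel
```

This is also why a moving comet produces the star “triplets” Pamela mentions next: the comet lines up across the three frames but the stars do not.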
Pamela: You can buy color detectors, but you don’t really want to for a lot of high resolution stuff because each of the different colors takes up physical space. If you have a red, a blue and a green sensor, you’re actually getting one third the resolution on the sky.
It’s better to pack the pixels in as close as possible, with each pixel just recording light or no light, rather than spreading them out so that one pixel records red light, another green light, another blue light.
So you get increased resolution but it takes you a little bit longer on the sky to get all three colors. That’s okay if you get the higher resolution.
Fraser: I know the more expensive video cameras that you can buy will have three CCDs in them. It’s the same process, right?
A single CCD has a lot less resolution and a lot less ability to see all the different colors well. The three CCD cameras are splitting it up and capturing each color into an individual CCD and you get a much better clarity of an image.
Pamela: Yeah, and there unfortunately you have to split the light so when you start dealing with really faint stars, you don’t want to be splitting the light up. It’s all a matter of what you’re trying to do.
How bright is the object you’re looking at and what resolution do you want to use. Different optical engineers have come up with dozens of ways of building these to solve all sorts of different problems.
Fraser: How does the CCD let us record things over time?
Pamela: The basic idea is photon comes in, hits atom, knocks electron out of atom. The electrons get piled up in what we call wells. Then at the end of the exposure we very carefully, and some of the really high sensitivity, low noise ones of these do this very slowly, read out what’s in each pixel one row at a time.
You can think of this as perhaps a football field of people lined up in rows with buckets. The buckets closest to the end zone are all empty. Everyone shifts the contents of their bucket one row. Then you read out what’s in that row. Each time you measure one bucket at a time.
You move everyone one direction and then that last row you read it all out one bucket at a time from left to right. Then you rinse and repeat. Dump everything one bucket over and then read it all out across the row.
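Pamela’s football-field analogy translates almost line for line into code. A minimal sketch of the readout order (the function name and the tiny 2×2 “chip” are mine):

```python
def read_out(wells):
    """Read a 2-D array of CCD wells the bucket-brigade way:
    shift every row one step toward the serial register, then clock
    that register out one pixel at a time, and repeat."""
    rows = [list(row) for row in wells]  # copy so the chip isn't destroyed
    values = []
    while rows:
        serial_register = rows.pop(0)            # shift all rows down one
        while serial_register:
            values.append(serial_register.pop(0))  # one bucket at a time
    return values

print(read_out([[1, 2], [3, 4]]))  # [1, 2, 3, 4]
```

Each electron packet is physically handed from pixel to pixel on a real chip, which is why slow, careful clocking keeps the noise down.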
Fraser: The CCD saves up the image for the entire duration and at the end…
Pamela: dumps the content. It’s very complicated electronics where they use a lot of very amazingly etched gates to move things very carefully introducing as little noise as possible across the surface.
Really sensitive CCDs we cool down as much as possible because just the thermal noise of having a warm chip will cause the electrons to bump from one well to the other. They are just so full of energy, like boiling water in buckets, that it splashes out.
If your buckets are close enough, that splash can splash into a different well. So, we keep things cool, move things out slowly and carefully and count every bit that we can.
Fraser: How does this affect some of the other wavelengths? We’ve talked about this in terms of visible light; now let’s go sort of on either end of the spectrum. It’s the same technique used for infrared and ultraviolet, right?
Pamela: Right, so in these cases with the photoelectric effect, different atoms are affected by different colors of light. There is a process called doping where you seed different atoms inside the silicon on the chip, depending on what colors you want to be sensitive to.
We just create slightly different versions of CCDs to allow us to see in the UV and to allow us to see in the infrared. But in all three cases, we’re using the same basic technology.
It’s when you start getting beyond these more familiar colors that you start getting into some rather strange technologies.
Fraser: What we’ve talked about, the CCD with a set of filters in front of it, with different chemicals on the surface of the CCD, will let you see from the infrared through visible light, through ultraviolet. You can have the same detector able to see all those different colors.
That’s what Hubble does, right? Hubble has one CCD that it can put different filters in front and see the different colors. I know it has a bunch of different instruments that are connected to the telescope.
Pamela: They all use similar technologies at this level.
Fraser: Right.
Pamela: They use a bunch of different filters, a bunch of different similar types of detectors.
Fraser: Let’s go to radio then.
Pamela: Radio works a lot like your satellite dish in your back yard.
Fraser: But I don’t know how my satellite dish works. [Laughter] So that’s not helping.
Pamela: Okay.
Fraser: It’s a dish and it’s kind of looks like a telescope, right? The satellite dish is focusing the radio to a detector. But what is the detector? How does that work? How does it focus? [Laughter] Take those in any order you like.
Pamela: One of the things that always baffles people with radio as compared to CCDs is with CCDs you’re getting the whole swath of the sky all at once and you’re going, “Ooh, pretty galaxy.”
But with the radio detector, you have what’s called a beam of the sky that you’re able to see. So, you point your detector at something and it basically gathers all the photons from that something and focuses them to a single point that then goes light or no light.
You can only see one pixel at a time so you have to actually scan your detector across the sky looking for radio signals. The detector itself is basically identical to an FM radio. It’s just a lot more sensitive.
Fraser: Now, that’s the same process for radio waves from the longest wavelengths. Is it the same for microwaves as well?
Pamela: Microwaves work the exact same way as well. The only difference is with microwaves you have to make sure that the surface of your detector is super smooth. Like if you dropped a single hair onto a microwave receiver, it would probably be the biggest bump on the receiver.
Whereas once you start getting out to the really long wavelengths, you could basically take the detector and put a dent in it and it would still work just fine. As you get to smaller and smaller wavelengths you have to have a much better configured surface to your reflecting dish. With the longer wavelength, it’s a lot more forgiving.
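A common rule of thumb (an assumption here, not something stated in the episode) is that a dish surface needs to be accurate to roughly a sixteenth of the wavelength it observes. A quick sketch shows why microwaves are so unforgiving compared to long radio waves:

```python
def surface_tolerance_mm(wavelength_mm, fraction=16):
    """Rule-of-thumb dish surface accuracy: irregularities should stay
    below roughly wavelength / 16 (the fraction is a common heuristic)."""
    return wavelength_mm / fraction

# A 21 cm hydrogen-line radio dish vs. a 3 mm microwave dish:
print(surface_tolerance_mm(210))  # ~13 mm: a dent barely matters
print(surface_tolerance_mm(3))    # ~0.19 mm: a stray hair is a big bump
```

The same rule explains why optical mirrors, at wavelengths a thousand times shorter still, must be polished to a fraction of a micron.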
Fraser: Let’s flip back over and go higher than ultraviolet.
Pamela: As you go to the shorter and shorter wavelengths, the higher and higher frequencies, it gets progressively harder to focus your light. With normal light that we’re used to, visible light, you just pass it through a lens and you just reflect it off a mirror. It’s perfectly happy to go where you want it to.
With x-ray light, the light would rather go through what you’re pointing it at. What you have to do is rely on what are called grazing incidence angles, where the light, at just a couple of degrees, grazes off the edge of a reflecting surface.
You try to funnel as much light as you can from one section of the sky onto your detector, while at the same time blocking all the light from the left, right and all around your detector so that you’re not getting hit from the sides.
It is a very complicated system that combines shielding and basically a funnel to get your light to the detector.
It’s again very similar to a CCD, except here you don’t quite know where the photons are coming from, because instead of focusing them, you are bouncing them in. So what they do is make what are called shadows.
They take a screen and put what looks to the human eye a random pattern of blocked spaces on it. Light that comes in from one set of angles will cast one type of shadow on the detector while light that comes in from a different set of angles will cast a different type of shadow as it passes through the screen.
With a lot of complex math, they are able to build images of the sky in x-ray by taking apart all of these different shadows. Think of it as an actress on stage illuminated with six different stage lights and she ends up with a star pattern of shadows about her feet.
You can figure out from the six different shadows where the lights have to be. By looking at the shadows from the screen, we can figure out where the x-rays have to be.
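This shadow trick is known as coded-aperture imaging, and a one-dimensional toy version captures the idea: a point source at some offset casts the mask pattern shifted by that offset, and sliding the known mask across the recorded shadow recovers the offset. The mask pattern below is arbitrary, chosen only so its best match is unambiguous:

```python
# Toy 1-D coded-aperture sketch. 1 = open slot, 0 = blocked slot.
MASK = [1, 0, 1, 1, 0, 0, 1, 0]

def shadow_from_source(offset):
    """Shadow the mask casts on the detector for a source at `offset`
    (the whole pattern just shifts by the source's angle)."""
    n = len(MASK)
    return [MASK[(i - offset) % n] for i in range(n)]

def locate_source(shadow):
    """Slide the known mask over the shadow (cross-correlation) and
    return the shift that matches best: that's the source position."""
    n = len(MASK)
    scores = [sum(MASK[(i - k) % n] * shadow[i] for i in range(n))
              for k in range(n)]
    return scores.index(max(scores))

print(locate_source(shadow_from_source(3)))  # recovers the offset: 3
```

Real instruments do this in two dimensions with many overlapping sources at once, which is where Pamela’s “lot of complex math” comes in.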
Fraser: And that is the kind of detector that is sitting inside the Chandra x-ray observatory or the XMM Newton, right?
Pamela: That’s exactly what we’re using, these grazing incidence angle focusing systems. It’s really amazing what they’ve figured out how to do.
Fraser: Because the photons are so high energy, it’s just harder and harder to control them and focus them.
Pamela: And harder to prevent them from doing things you don’t want them to do. Gamma rays make it even harder. Gamma rays really just want to go through you. They really just don’t care, they want to go straight through the detector and keep going.
The way we end up detecting gamma rays in a lot of cases is through what you call scintillation. You take some sort of material that you’re fairly certain is going to stop a gamma ray, and hopefully some sort of material that when it stops the gamma ray is also going to give off a flicker of light that is easy to detect.
Different scintillation materials are used and these different materials, including our atmosphere in some cases, whenever they are hit with gamma rays give off normal light and we detect the normal light using all the other types of technologies that I’ve talked about so far.
It sounds convoluted and it is. You’re basically taking gamma ray light and transforming it into something else to make it detectable. It also means that we aren’t always sure where the gamma rays are coming from.
What you end up doing to try and figure out where a gamma ray source came from is you point your detector (think of it as a bucket) at a large section of the sky.
Then block everything coming in from the sides as best you can, and then hope you gather some gamma rays inside your bucket.
When they trigger the scintillation material and you detect a flicker, you instantly look at that part of the sky in x-rays and other types of light and hope that whatever was giving off the gamma rays is also giving off light in other colors.
Fraser: That lets you verify that’s where it was coming from.
Pamela: Right. You might get this giant five degree or more swath of the sky, something ten times bigger than the moon on the sky and know that somewhere in that large area there was a gamma ray burst.
Then you go and look instead with the x-rays which we can focus a bit better and once you narrow down where it was with the x-rays, then you go look with the visible light. Then you know bang, I know exactly where it is located on the sky.
Fraser: Is there a way once you’ve detected it you can continually watch it with the gamma ray?
Pamela: With the gamma ray bursts, they last at most a hundred seconds or so. There have been a few exceptions, but on average they’re only a few seconds long. So you could continue looking there, but you won’t gain a lot more information.
We do use what’s called spectroscopy to say this gamma ray photon was more energetic than this other one. We try to get some extra information on what specific colors of gamma ray are coming out; what specific colors of x-ray are coming out, but we have no positional information from those.
Fraser: Now we have a couple of detectors which are outside of the spectrum. One is pronounced Cherenkov radiation. Is that right?
Pamela: Yes.
Fraser: That’s like another way to look at gamma rays.
Pamela: Cherenkov radiation actually comes from cosmic rays. You get this high energy proton that’s coming from somewhere in the universe, we don’t always know where. It could be coming from a black hole that’s feeding on something. It could be coming from an exploding star.
Something got this proton going really fast. As it hits the Earth’s atmosphere, it’s going faster than light can go in the atmosphere. That sounds kind of warped because really nothing is supposed to be able to travel faster than the speed of light.
But it’s the speed of light in a given medium. Light has this maximum speed that it can attain and that’s given in a vacuum. In all sorts of different things light travels at different speeds. In fact in some medium, you can actually get light going slower than you can walk.
You can imagine this proton chugging its way across the universe hits the atmosphere and in the atmosphere light doesn’t go as fast as it does in a vacuum.
The proton is now going faster than the light, and this leads to a really cool blue glow as the proton passes through the medium and gives off all sorts of little bits of radiation.
We detect that with different detectors by just looking for the right color of light to be given off. It’s basically a proton braking, and the energy has to go somewhere; the somewhere it goes is into light.
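The Cherenkov condition reduces to one line of arithmetic: light in a medium with refractive index n travels at c/n, so a particle radiates only if its speed exceeds c/n. A small sketch with typical index values (the exact numbers are approximate):

```python
def cherenkov_threshold(refractive_index):
    """Minimum particle speed, as a fraction of c, for Cherenkov light:
    the particle must outrun light in the medium, so v/c > 1/n."""
    return 1.0 / refractive_index

# Air near sea level (n ~ 1.0003) vs. water (n ~ 1.33):
print(cherenkov_threshold(1.0003))  # ~0.9997: only extreme particles qualify
print(cherenkov_threshold(1.33))    # ~0.75: much easier to exceed in water
```

That contrast is why atmospheric Cherenkov telescopes only catch the very highest-energy cosmic rays, while water-tank detectors light up for far slower particles.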
Fraser: I guess the last one we want to talk about is detecting the undetectable which is neutrinos.
Pamela: They’re not totally undetectable.
Fraser: Yeah but can’t they move through a light year’s worth of solid lead and not bump into anything? [Laughter] That seems pretty undetectable to me.
Pamela: Yeah, it’s all a matter of statistics. Given any one single neutrino, it will just keep going. But if you send enough neutrinos through basically a giant tank of cleaning solution, because neutrinos will, given the opportunity, sometimes very rarely, react with chlorine. When you send them into this material, the neutrinos will occasionally interact and give off a bit of light.
We look for that light. Based on every one detection we get we know there must have been a whole bunch of other neutrinos that we didn’t detect. It’s all a matter of statistics. Since we know roughly how big we think neutrinos are and we know roughly how big the atoms in the solution are, we can figure out crossing times. We can figure out cross-sections.
One way to think of it is if you have maybe 30 human beings scattered around a big auditorium and you start throwing tennis balls. Yes, the room is mostly empty, but eventually one of those tennis balls is going to hit a person and you can figure out by knowing how big a person is and how big a tennis ball is, what the probability of the tennis ball hitting the human is and how many tennis balls probably flew without hitting the person.
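Fraser and Pamela’s tennis-ball analogy is exactly a cross-section argument, and it can be sketched as a tiny Monte Carlo. All the numbers here (target count, areas, throw count) are made up for illustration:

```python
import random

def hit_probability(n_targets=30, target_area_m2=0.5, room_area_m2=2000.0,
                    n_throws=100_000, seed=42):
    """Monte Carlo version of the tennis-ball analogy: the chance that a
    random throw lands on a target is roughly the fraction of the room
    the targets cover, (n_targets * target_area) / room_area."""
    random.seed(seed)
    covered_fraction = n_targets * target_area_m2 / room_area_m2
    hits = sum(1 for _ in range(n_throws)
               if random.random() < covered_fraction)
    return hits / n_throws

# The estimate converges on 30 * 0.5 / 2000 = 0.0075:
print(hit_probability())
```

Run it backwards and you have the neutrino logic: count the hits, know the cross-section, and infer how many “tennis balls” must have flown through undetected.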
Fraser: I think that covers all of the methods of detection. I think when you hear about how astronomers are building up their pictures and doing their research, hopefully that will help give you a little more insight. Thanks a lot Pamela.
Pamela: It’s been my pleasure.
This transcript is not an exact match to the audio file. It has been edited for clarity. Transcription and editing by Cindy Leonard.