Horological super nerds, help a watch simp out.

Ran across this article in my news feed:

https://www.sciencealert.com/scientists-found-an-entirely-new-way-of-measuring-time

My pea-brain is still trying to make sense of it, but can someone walk me through some of the basics like I'm an eight-year-old?

Essentially, from what I can gather: you take helium atoms and blast them with lasers to over-excite the electrons, putting them in a Rydberg state (a quantum state). These Rydberg atoms now behave like waves, a bunch of them together produce wave packets, and a bunch of wave packets together create interference: very specific, unique ripple patterns, referred to as fingerprints, when pictured and measured by lasers yet again. So by measuring these fingerprints in the atomic pool, one now has a reference for time?
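To check my own mental picture, here's a toy sketch of the fingerprint idea (the numbers are entirely made up; this is not the paper's data or method, just generic wave interference):

```python
import numpy as np

# Toy "wave packet": a superposition of a few Rydberg-like energy levels.
# Energies and amplitudes are invented for illustration only.
energies = np.array([1.00, 1.07, 1.15])    # arbitrary units
amplitudes = np.array([0.6, 0.3, 0.1])

t = np.linspace(0, 200, 2000)              # time axis, arbitrary units

# Each level oscillates at its own rate; their sum interferes.
psi = sum(a * np.exp(-1j * e * t) for a, e in zip(amplitudes, energies))
signal = np.abs(psi) ** 2                  # what a probe laser would sample

# Because the beat pattern doesn't repeat itself (for incommensurate
# energies), any snapshot of `signal` acts like a timestamp.
print(signal[:5].round(4))
```

As far as I can tell, the "fingerprint" in the article is a far more sophisticated version of this beat pattern, read out with lasers.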

And if you look at certain Rydberg signatures, you're observing different increments of time? From what they're quoting, they observed timestamps for events of 1.7 trillionths of a second, or 1.7 picoseconds, which is about how long it takes light in a vacuum to travel 0.5 mm.
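That 0.5 mm figure checks out with plain arithmetic (distance = c × t):

```python
c = 299_792_458                  # speed of light in vacuum, m/s
t = 1.7e-12                      # 1.7 picoseconds
print(f"{c * t * 1e3:.2f} mm")   # -> 0.51 mm
```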

What is knowing this timing now good for? They quickly mentioned it could be beneficial in supercomputing, but how so?

Thank you, Crunch on.

·

Okay, you're talking to a liberal arts type here, but I think it intuitively makes sense that being able to measure increasingly smaller increments of time has extremely important applications to traditional supercomputing.

If you think of normal computing, everything basically comes down to a switch: 0 or 1. All hardware comes down to increasingly complex chains of these switches that eventually let you do what you want to do. The smaller the increment of time you can measure, the faster you can flip the switch on and off. The faster you can flip the switch on and off, the more processing power you can ultimately derive from a given system (say, 2 million on-off operations in a given time frame versus 1 million). This has obvious advantages.
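A rough back-of-the-envelope to put that in numbers (my own figures, not the article's): if the smallest tick you can resolve is Δt, the implied switching rate is 1/Δt.

```python
tick = 1.7e-12                      # smallest measurable interval, 1.7 ps
rate_hz = 1.0 / tick                # one on-off flip per tick
print(f"{rate_hz / 1e9:.0f} GHz")   # -> 588 GHz, vs a few GHz in today's CPUs
```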

Then there are interesting applications in quantum computing. Unlike in a traditional circuit, the switch doesn't have a simple on-off. Rather, it exists in a superposition, where it can effectively be a 0 and a 1 at the same time. A switch in that kind of state is called a qubit.
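A minimal sketch of that "both at once" idea (standard textbook qubit math, nothing specific to this experiment): a qubit is just two complex amplitudes, and their squared magnitudes give the odds of reading a 0 or a 1.

```python
import numpy as np

# A qubit as two complex amplitudes, one for |0> and one for |1>.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition

# Squared magnitudes are the measurement probabilities.
print(np.abs(state) ** 2)   # [0.5 0.5] -- 0 and 1 at once, until measured
```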

This qubit state requires very specific inputs. In this case, the ability not only to accurately measure time (a second measured by this method matches the internationally accepted definition of a second) but to precisely measure time (the interval measured via the vibrating helium atoms comes out the same every time) could be make or break for quantum computing, because quantum computing requires extremely precise timing to recreate specific conditions. The more qubits you put into a system, the more complex and precise the conditions required to replicate a given series of calculations, and the greater the precision and accuracy needed in timekeeping to keep that system working. But the more qubits you have in a system, the exponentially greater the processing power.
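And a minimal sketch of why the power grows exponentially (again, generic quantum mechanics rather than anything from the helium paper): describing n qubits takes 2^n amplitudes, so every qubit you add doubles the state being tracked.

```python
import numpy as np

qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)   # one qubit in superposition

state = qubit
for n in range(2, 6):
    state = np.kron(state, qubit)   # tensor product adds one more qubit
    print(f"{n} qubits -> {state.size} amplitudes")
# 2 qubits -> 4, 3 -> 8, 4 -> 16, 5 -> 32: exponential growth
```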

The real-world applications for this sort of stuff are super cool. For example, a drug company could develop a drug from scratch and simulate, down to the smallest degree, its impact on the target disease and the human body, all without human trials or the guessing game of chemical reactions, because they'd finally have the processing power to simulate everything about the human body, the target disease, and the chemical reactions involved.

True mega nerds should weigh in in case I've flubbed the engineering, but that's my layman's understanding.

·

Standards-setters are always trying to identify ways to make measurements more precise and more consistent. If this method of measuring "one second" produces more precise, consistent results than our current method, then it is (arguably) superior. This would help with GPS, satellites, computers, etc. that require a high degree of precision and consistency in measurement. Currently, a second is defined as the duration of 9,192,631,770 periods of the radiation from the hyperfine transition of the ground state of an undisturbed cesium-133 atom. If this new method is either more consistent or more precise, then it would be superior. I think it could also be helpful in measuring events that occur on extremely brief timescales, like particle decays following collisions at CERN.
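To put the two timescales side by side (rough arithmetic on my part): one cesium hyperfine oscillation takes roughly 109 picoseconds, so a 1.7 ps timestamp resolves events far shorter than a single tick of the atom that defines the second.

```python
cs_hz = 9_192_631_770        # cesium-133 hyperfine frequency defining the SI second
period_ps = 1e12 / cs_hz     # one oscillation, in picoseconds
print(f"one cesium period: {period_ps:.1f} ps")          # ~108.8 ps
print(f"1.7 ps = {1.7 / period_ps:.3f} cesium periods")  # ~0.016
```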

Anyway, those are just my top-of-mind thoughts.

·

In a previous post, I linked to this video that I thought was interesting. It does a good job of explaining how we've progressed at measuring time and why it is important.

https://youtu.be/mg9yc7_7BWc

The video mentions leap seconds. There is currently a campaign underway to stop adjusting for leap seconds...

https://www.timeanddate.com/news/astronomy/end-of-leap-seconds-2022

·

That will be crucial for getting a fat research grant to study this to death.

·

Wow.

Informative posts. 🙏

·

I don't care how you measure time; if my wife is still late, it has no impact.

·
Edge168n

Okay, you're talking to a liberal arts type here, but I think it intuitively makes sense that being able to measure increasingly smaller increments of time has extremely important applications to traditional supercomputing. […]

Wow! I was trained in physics, and I could not, for the life of me, have put it any better than you have there!

·
Mr.Dee.Bater

Wow! I was trained in physics, and I could not, for the life of me, have put it any better than you have there!

Liberal arts don't fail me now! I'm a secret nerd who just loves this bleeding-edge science shit.

·
Edge168n

Okay, you're talking to a liberal arts type here, but I think it intuitively makes sense that being able to measure increasingly smaller increments of time has extremely important applications to traditional supercomputing. […]

Aaaah qubits…my old nemesis. Every time I think of the fuckers I have to go take a quiet moment to myself afterwards to calm down. Never much made sense to me, except sometimes when I think of qubits as occupying not a third state but a hyper-bridge state.

Thank you very much for helping clear that up a little more for me; it definitely helps me remember and appreciate the importance of timing accuracy and precision in computing.

·
caktaylor

In a previous post, I linked to this video that I thought was interesting. It does a good job of explaining how we've progressed at measuring time and why it is important. […]

This helped, and hurt. Thank you, dammit.

No, I do love this a lot, and it definitely advanced me ever so incrementally in understanding time…whatever little of it I thought I understood.

·

The faster the oscillation, the more accurate. High-beat movements at 10 beats a second (Zenith) versus 4 beats (lower-end movements) make for more accurate "second" intervals. So stepping up to 1.7 picoseconds means bloody accurate "second" intervals. Lose a second every billion years, maybe (now I'm guessing).
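To put that watch analogy in numbers (an illustrative comparison of my own): a movement's beat rate caps the smallest interval it can resolve, and 1.7 ps sits a very long way down that ladder.

```python
# Smallest resolvable interval = 1 / beat rate (illustration only).
movements = {
    "4 beats/s (lower-end movement)": 4.0,
    "10 beats/s (Zenith)": 10.0,
    "Rydberg fingerprint": 1 / 1.7e-12,
}
for name, hz in movements.items():
    print(f"{name}: {1 / hz:.3e} s per tick")
```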

·
caktaylor

Standards-setters are always trying to identify ways to make measurements more precise and more consistent. […]

Explained like I'm an eight-year-old, and brilliantly put :)

·
Edge168n

Okay, you're talking to a liberal arts type here, but I think it intuitively makes sense that being able to measure increasingly smaller increments of time has extremely important applications to traditional supercomputing. […]

Explained like I'm an eight-year-old! You have my 61-year-old brain frothing! Bloody well explained. The quantum analogues in computing, like the chemical on-offs of our four proteins with light, make quantum computing so exciting. I tried explaining it to an eight-year-old; yours is brilliant!

·

Hey, does Greenwich know about this? 😂