
Panel Discussion: Beyond Moore's Law

Jun 06, 2021
Alright, I'm going to go ahead and start here. Welcome to the final panel of the day at the conference. On the paper schedule this is a one-hour panel, but in fact we have an hour and a half assigned. We will not make you late for dinner, but we are using the additional time for you to ask questions of this distinguished group of panelists, and we are eager to have a good discussion in the second half.

So this panel is about what we're going to do beyond Moore's Law. This is the time history of Moore's Law, from here until about 2025, getting to five nanometers. We've come to expect exponential improvements in integration density, energy efficiency and, at one point, higher performance with each generation, every 18 months to two years, and it is ingenuity, the art of making do, as I understand the Italian root of the word, that has allowed us to maintain this progression. But around 2025 to 2030 this is all going to run out, and the question is what we do next, because we have to start planning for it now. You know there have been 40 years of semiconductor scaling, not really 40 years, it's actually 50 years, and every 10 years someone says that Moore's Law is going to end. It's gotten to the point where Alan Kay said, I predict that Moore's Law will never end, because then I'll only be wrong once. So why do we believe that Moore's Law and these technological improvements are in fact going to end this time? Gordon Moore stood in front of his graph showing the economic theory that predicted this doubling, and the answer is that, as we have been shrinking, Moore's Law has been driven mainly by lithography, by photolithography processes.
And we have been moving toward smaller and smaller scales at that exponential rate, and the benefits have been conferred in terms of performance and energy efficiency of our technologies. But as we get down to the five nanometer scale, we get to the point where we are approaching the size of the atomic lattice of silicon, which is the main material that we use now. That is not a single atom of silicon; a single silicon atom is about 0.1 nanometers, but it takes a large population of these atoms to get a particular statistical population of dopants so that the device behaves like a transistor and is not subject to quantum effects, and not the good kind of quantum. So at about five nanometers, if we continue to project this, we think things will start to fall apart, and we are already seeing the beginning of that today. In fact, this is the diagram that we used to justify the exascale program in all of our respective countries: in 2004 Dennard scaling ended, which governed the energy efficiency of the individual transistors, but we thought that at least we could continue with Moore's Law on that upper curve, the red curve. We could still double the number of transistors on the chip, and we took advantage of that to continue to scale performance by doubling the number of cores. What we are here to tell you today is that that is no longer the case, at least not in terms of classical lithography.

We expect all those curves to change, so what do you do next? And, by the way, they all magically flip just as we expect our first exascale machines, so what do we do after exascale? Fortunately, Moore's Law is a techno-economic theory and there are other ways to scale, but it will require ingenuity; it is no longer a matter of waiting for the next step of improvements in silicon lithography. We need to create new devices, new transistors, new models of computation. So this is kind of our master diagram here: if you can't scale with photolithography, then you need to pursue new architectures, more competitive architectures or new packaging technologies, and that is our horizontal axis; we could probably get about 10 years of scaling out of that. Beyond that we also need more effective materials, more effective manufacturing techniques, nanoassembly, with about a 10-year
lead time; we could probably get another 10 years out of that. And of course everything is on the table here, including new models of computation, so we have speakers here to cover that area: new computing models such as digital, quantum, and brain- or bio-inspired. All of these are synergistic; they do not compete with each other. I always like to joke that with digital you use your Microsoft Excel to balance your checkbook, the neuromorphic one will say it looks balanced, and the quantum one will give you the superposition of all possible balanced checkbooks. All of these extend computing in different directions that are complementary, but hey, maybe we can create some tension in the panel, so don't look at me. We have a distinguished group of panelists here. We have Olga Ovchinnikova, who will talk about materials and nanofabrication and how we can go beyond the current approach we have had with ever-tinier digital circuits.
Thomas Lippert from the Jülich Supercomputing Centre, who will talk about quantum and what the requirements really are to bring it to production, and Karlheinz Meier from Heidelberg, who is a leader of the Human Brain Project in the EU and will talk about neuromorphic, brain-inspired computing. So without further ado I'll call up Olga; she's the team leader for nanoimaging at the Center for Nanophase Materials Sciences at Oak Ridge. Thank you, John, for the introduction; you've actually done a great job setting the stage for what I'm going to talk about. So, I'm coming from Oak Ridge.
I work at the Center for Nanophase Materials Sciences, and my work there has focused on developing unique technologies that allow us to think about how we can start manipulating materials beyond what we are doing now with standard lithographic approaches, technologies that will allow us to structure materials basically with atomic precision to enable the functionality that we're looking for. Now, to give you a brief lesson in the history of materials and technological innovation: if we put time on the x-axis and new technology on the y-axis, we are used to seeing this curve; it has been gradually increasing and then in the modern era it has really taken a leap forward. But what really underpins this is our ability to process materials in new ways. If you go all the way back to the Bronze Age, we started making tools using the raw material as it was; then, as we got into the Iron Age, we transitioned to being able to manipulate materials, we could forge metal to make it stronger, but we still didn't have full control of the materials. The really big leap, as we get to what is now called the silicon era, is our ability to understand and process materials along the path from organic chemistry to materials science. But what I want to convince you of now is that the end of the silicon era, and with it something like Moore's Law, is approaching, and what we are entering is really the era of directed matter.
So what do we mean by directed matter? The ability to manipulate materials with atomic precision, and not only in 2D but in three dimensions. And if we can do this, what kinds of technologies will we be able to enable? Of course, we're here at a computer science conference, and advanced computing is really one of them; this is where we can really see big gains. But it doesn't stop there in terms of applications: we can start thinking about atomic-scale manufacturing for nanomachinery, we can start talking about molecular machines for drug delivery, and about energetic materials. So this exponential curve will become steeper as our ability to manipulate materials improves.
Now, this is where we want to go, of course. Why? Because we've been talking about the big transitions that have happened, and much of computing, the move away from cathode ray tubes, happened because we were able to make transistors: we were able to control silicon, and now we have modern computing. If we want to start thinking about where to go next, we need atomic control of where we are placing our dopant atoms and our interfaces. Where we are now, if we think about 2D and 3D fabrication at the state of the art in nanofab, is that we're reaching these limits, these 10 nanometer limits. But the questions I want to raise, since this is a panel and we can discuss them: can we start fabricating structures in three dimensions, and can we begin to get below that limit?
This is really where we start to get interesting functionality, and we can start manipulating atoms much as IBM's Don Eigler did with the STM, but now in three dimensions. As John said, you have these 3D stacking technologies; can we begin to individually move dopant atoms to the precise locations predicted by theory? This is really where it comes into play. If you want to design the next generation of materials, we have to be able to create new probes to manipulate the materials at these scales, we have to be able to see the materials in situ, you have to be able to see what you are doing, but most importantly, we have to have a feedback loop on what we are doing, and we need to be able to use our understanding of physics and chemistry to drive these transformations. This is where we basically want the theory linked with our in-situ visualization, and then use big data and data analytics to drive the materials into the desired states. So the theory is in the loop for in-situ fabrication, and if you can do that, the process starts to be controlled. If you look at the image here, those are actually individual atoms: we can extract or anneal individual elements, basically going from crystalline to amorphous, and we can derive the individual atomic networks.
We have also been able to do this with silicon, and we can move individual dopants. To improve this precision we can use data analytics: we can define and find each individual atom based on the positions of the atoms relative to each other and their bond angles, you can understand the physics and chemistry that are involved, and this is where the feedback loop comes in. So what are the fundamental capabilities we want? We want atomic-scale control of materials for assembly, we want to be able to create hierarchical structures, and we want to be able to use predictive materials science, really using theory and computer science to help us drive the creation of the materials, and this has to work at multiple length scales.
You have to be able to understand not only what happens when a single dopant is placed in a material, but how that translates into device performance, heat management and so on, and all of this is of course supported by our ability to model and simulate these kinds of behaviors, so MD simulations are quite important here. What you see here is a scanning transmission electron microscope image of an aluminum nitride matrix, and we are using the electron beam to move individual cesium dopant atoms through the lattice.
This is not some strange fairy tale; we are really moving in this direction. So the idea of this in-situ visualization with feedback and theory in the loop is that we basically want to take the periodic table of elements and not think of its entries as static, but as building blocks, like Legos, that we can move and position to create the material on demand. The cartoon here on the right is the idea that you have a kind of brain, driven by computing through modeling, that tells you exactly where you are putting your individual dopants and where you are putting the defects, but you are doing this in three dimensions, and because all of this is in situ and you have the feedback, you can also cycle the performance. When you start using theory in the loop, you can model and simulate a 3D structure at the nanoscale and then create it; this image is basically the fusion of what the modeling predicted and what you were actually able to make, and the field of view contains structures about 200 nanometers wide, so we're really entering these domains.

With multimodal visualization, we can start to think about what we have available in our toolbox: we have our electron microscopes, we have our ion microscopes, but then, both in the US and in Europe, we have our synchrotron sources, so we can really start to see the chemistry that is taking place, see the transformations, and see them in 3D. This example was done at the Advanced Light Source, where they're looking at a transistor junction, so you can start to see how the device cycles and how it fails, and we can use our simulations to drive this capability. So we can do bottom-up design: we can start with the materials, predict the type of material that you want to make, and then create it in these nanofactories, and then we can start to look at our ability to tailor interconnects, defects and defect behavior. For defect-tolerant manufacturing, beyond individual material processing, we start to think: okay, when you make the material, you put it in a device and start cycling it again, and that's why it's very important to have in-situ visualization. You want to be able to see how these nodes work and how they fail, and then we need the ability to do non-destructive imaging for failure analysis and to go in with those electron and ion beams to correct the device on the fly.
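As a concrete picture of what theory-in-the-loop fabrication could look like algorithmically, here is a minimal, self-contained sketch. Everything in it is hypothetical: the ToyMicroscope stands in for a real instrument, each dopant is pre-assigned to a theory-predicted target site, and the numbers are illustrative, not measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the instrument: dopants start off target, and
# each "beam nudge" moves one dopant partway toward its predicted site.
class ToyMicroscope:
    def __init__(self, n_dopants):
        self.positions = rng.uniform(0, 1, size=(n_dopants, 2))

    def acquire(self):
        # in-situ image with a little position noise, normalized field of view
        return self.positions + rng.normal(0, 1e-3, self.positions.shape)

    def beam_nudge(self, index, target):
        self.positions[index] += 0.5 * (target - self.positions[index])

def fabrication_loop(scope, targets, tol=1e-2, max_steps=500):
    """Theory in the loop: image in situ, compare against the predicted
    lattice sites, nudge the worst-placed dopant, repeat."""
    for step in range(max_steps):
        observed = scope.acquire()                           # imaging
        errors = np.linalg.norm(observed - targets, axis=1)  # analytics
        if errors.max() < tol:                               # all placed
            return step
        worst = int(errors.argmax())
        scope.beam_nudge(worst, targets[worst])              # manipulation
    return max_steps

targets = rng.uniform(0, 1, size=(10, 2))   # sites predicted by theory
scope = ToyMicroscope(n_dopants=10)
print("converged after", fabrication_loop(scope, targets), "nudges")
```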
I think this will offer unique opportunities to really advance, and the ability to correlatively map different imaging modalities in situ will give us information about the performance of the materials as well as how the entire circuit behaves. So as we move toward atomically precise manufacturing, one of the comments you often get is: if you really start to make each device atomically precise, how well can you position each atom? And if you start talking about how many transistors occur on a chip right now, can you get every single one right?
So the question arises: maybe that's not necessary. If you can do defect-tolerant manufacturing, where you understand what that basic interface region looks like and where the performance of your device really starts to degrade, you can start to incorporate some tolerance for flaws into your fab processing. And you do this not blindly: as you build up stack after stack, you start to understand how each stack affects the others and how much you can push back and forth. Of course this is also very important, and this is actually a device for quantum computing: if you know exactly where the position of your dopant atom is and how far it is from the gates, you start to understand where it still behaves the way you want compared to the onset of unwanted quantum behavior. This is where you have the precise patterning, then you have the modeling, and then you have the device performance.
Now I wanted to give you a kind of vision of what it would take to make something beyond Moore's kind of digital device. One of the devices John showed in his graphics was the spin-based transistor, and many of the materials proposed for such devices are 2D materials. For anyone who has worked in the clean room: how do you process a real 2D material device? That's a really difficult challenge. First of all you have to grow these materials, you have to make sure the interfaces are clean, and anyone who has worked with graphene will know that it is almost impossible to get big clean sheets of it; you see these beautiful TEM images, but they're usually 2 nanometers by 2 nanometers. But okay, let's say we get this and we can clean it up; how do we then build in 3D? Basically, we can grow our electrodes directly on the material, we can purify these electrodes, we can stack the other material on with in-situ nanomanipulators, and we can start to characterize the stacks, because these are structures we can characterize using our microscopes and see exactly what the interfaces between the materials are, and then we have to be able to somehow encapsulate these devices to make sure we don't pick up any contamination from the atmosphere once we take them out of the vacuum. Now, bear with a little Mac-to-PC compatibility trouble, but I think you can still see it: this is a picture of a synthetic layered device, so you have two layers of graphene, your 2D semiconductor channel, and then your boron nitride encapsulating the graphene. It's a beautiful cartoon illustration, but in practice, to do this, you have to have a multi-beam approach, not a single electron beam or a single ion beam: you need electron beams to probe materials and to move them, you need ion beams to cut them, potentially multiple ion beams to implant and move dopants, and then you need gas injection systems of different types to establish the interfaces, to deposit your electrodes as well as the encapsulation of the material afterwards. But as I said, I think we are moving in the right direction, even though this is an artistic representation.
We're really getting closer to being able to achieve this. We can use a beam to start sculpting the material to the intended nanoribbon size that we need, we can use our electron beams to move the dopants around and get the right type of doping in the graphene, and we can start using the beams as well to clean surface impurities and to stack the material. So I want to finish by saying: I think we're moving away from this old age of just hammering materials and seeing what happens. John showed this nice picture: silicon has imperfections. Well, we don't have to just place transistors wherever the material allows; we want to be able to actually position the dopants in the transistors and have them work exactly as designed.
In terms of knowing where each individual atom is, we're getting closer: we can simulate, for graphene as we irradiate it, what kinds of bonding structures we are going to get, and those match very well with the experimental results of what we found. We can do this now that we're starting to have these atomic forges to do defect and materials engineering for our computing applications. And since this is a panel, I want to open it up so we can then discuss what is needed in materials science for these post-lithography, directed-matter technologies. Well, we need real-time feedback through predictive models and data analytics.
I think that's key: we have to use physics and chemistry to drive our ability to create materials, not just make materials and test how well they match the theory. We want to use in-situ imaging and scattering technologies to understand how these materials behave and then feed that back into our models and simulations to improve them, and we want to start being able to design these materials with atomically precise and defect-tolerant manufacturing methods. If we can do this, some of the results will be real-time feedback and control of the processing inputs, which will reveal the physics of the cascading reactions we are looking at, and scalable manufacturing of materials and devices. So with that I'll wrap up and take questions. As our next speaker gets set up, we have time for a question for Olga.
Before you switch over, yes, up here in the front; there's a microphone, use your microphone. Thank you very much. You haven't said anything about organic materials, and I know it's a complicated problem, but the scales you showed are now at the atomic level, while bacteria are two to three microns, so if we had robots at that scale, I mean bacteria or something smaller, that could organize themselves to do the assembly, is that a possibility? We have had organic semiconductors. Yeah, this is where these molecular machine ideas actually come in handy, you know?
You don't move the atoms yourself; you create machinery and it basically starts generating the materials you want. Work has been done in this area. I would say organic materials just don't get as much focus, but there are opportunities there too. When we talk about organic materials, some of the really promising ones are soft materials and polymer systems, because as you add features, as you add complexity to the polymer system, you can add more and more functionality. But anyone who has worked with polymers knows that these systems are very difficult to work with, in terms of time scales, as you said; for that reason, most of the things we have focused on are the more standard solid-state physics and materials science. Great, thanks. Next we have Thomas Lippert from the Jülich Supercomputing Centre, and he's going to talk about quantum.
Okay, thank you. So you might be wondering why you see two faces here: Kristel Michielsen, who is the expert on quantum computing at my centre, couldn't be here, so unfortunately you don't have the pleasure of listening to her and you have to be happy with me; I hope I can fulfill all that is required. First of all, what is our interest, as a supercomputing centre, in quantum computing? In fact, I would say it is not just the question of what we do beyond Moore's Law; that is a question in itself, but quantum computing is a really interesting one for us in its own right.
We want to go beyond classical digital computing; that's what we are after. Even if Moore's Law were to continue, we would still want to do that, and in this sense quantum computing is important. What's more, if Moore's Law stays with us a while longer, that will also benefit quantum computing, as we just heard in the last talk. So that's exactly our perspective: the computationally hard problems are what interest us, and in fact not only us but very much industry as well. Large industries are very interested in optimization because that is an engineering problem, while science is more interested in quantum simulation. When it comes to optimization, what you see is a huge variety of different things to optimize, places where optimization takes place, and hard optimization problems, and on the other hand things like machine learning, computer vision, image processing and so on. And in the field of quantum simulation there are numerous things that we cannot do classically that perhaps we can do through quantum sampling; that is the idea.
The next question is who is interested in a machine that would be designed for Shor's algorithm. Of course, if I have such a machine and can use it for optimization, I'm very happy, but I think it might really be something for number theorists on the one hand, and on the other hand there are these strange guys who would like to, say, crack our codes. I don't think this should be the main goal of quantum computing; the main goal has to be to solve scientific problems. Now, from a supercomputing point of view, we are always evaluating new computing technologies; that's what we do every day to design the next stage of computing, so exascale is one of those keywords and we are all going in this direction, especially in Europe. And we are eager to assess, from an architectural point of view, not only the processor side but also the other side that introduces new computing paradigms, and what is needed here are deep dives and benchmarking methods to be able to compare what those machines can do.
We anticipate what they could do. In fact, our strategy starts with reliable simulations on very large machines, on digital supercomputers, and then of course we compare them with real runs on quantum computers. To simulate a quantum computer on a digital computer, you have those qubits, which are two-level systems; you use them as spin-one-half systems, and of course you can create a state like alpha times the zero state plus beta times the one state, with the proper normalization, and then you compute the evolution under the so-called time-dependent Schrödinger equation. Usually a Trotter product formula is used to do that, but there are also other methods, such as Chebyshev polynomial expansions.
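As a rough illustration of the state-vector simulation he describes, here is a minimal sketch using nothing but NumPy. The gate-application trick is standard, but the sizes and gates chosen are purely illustrative; this is not the Jülich simulator itself.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector.

    The state vector holds all 2^n complex amplitudes, which is what
    makes memory the limiting resource for these simulations.
    """
    psi = state.reshape([2] * n_qubits)           # one axis per qubit
    psi = np.moveaxis(psi, target, 0)             # bring target axis front
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 10
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                    # |00...0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
for q in range(n):                                # uniform superposition
    state = apply_single_qubit_gate(state, H, q, n)
print(abs(state[0])**2)                           # 1/2^n = 1/1024
```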
Now, finally, you have to evolve a very high-dimensional vector, so to speak, on your computer, and this costs memory; that's the reason why today's computers can only handle, say, 45 or a little more of those spins. This is what will eventually be called quantum supremacy: when you can't go beyond a simulation because of the memory limitation, which grows exponentially as the state space increases. Now, what do we really want to simulate? These are of course the prominent types of qubits that are used by different companies, for example the transmon qubit by IBM, the Xmon by Google, and of course also the D-Wave flux qubit technology on this now-famous Chimera chip, which is willing and able to perform quantum annealing. So these are the targets of our simulations for comparison and benchmarks, and they are of course more or less mature, exactly as you will see below. You have to translate those technologies into models, and you can expect to have models that capture what the real physics is; of course you have to add more and more terms as you find out what the problems are with those models. For example, very often people start with two-level systems where they don't actually have two-level systems.
This could be a problem for the IBM qubit, and you should be aware of all the other levels that you have; all these things are unresolved and not yet fully understood, and that is the right field in which to do simulations, benchmarking and ultimately co-design. Now you have the Schrödinger equation and, as I said before, a high-dimensional space on your computer, and you can test, of course, how far you can go. The Jülich simulation program has been developed for, I guess, almost twenty years now, and if you look at these green, red and orange lines you see the machines it has run on: approximately 42 qubits could be simulated around 2010 to 2012, and recently we have implemented this on the K computer with 45 qubits, which seems to be close to the end; the next machine could go up to 46, but this is going very slowly, and then you'll get to 46.5 qubits, and that's probably the end. So within this range we can test and anticipate what's happening; we actually implement a universal quantum computer in simulation, and for a specific device we have to take the couplings that it has and represent the physics.
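To see where those hard limits come from: a full state vector stores 2^n complex amplitudes at 16 bytes each, so every additional qubit doubles the memory. A quick back-of-the-envelope check (just arithmetic, not the specs of any particular machine):

```python
# Memory for a full n-qubit state vector: 2**n amplitudes,
# 16 bytes each (double-precision complex).
for n in (42, 45, 46):
    tib = 2**n * 16 / 2**40
    print(f"{n} qubits: {tib:,.0f} TiB")
# 42 qubits:     64 TiB
# 45 qubits:    512 TiB
# 46 qubits:  1,024 TiB  -> why ~46 qubits is the practical ceiling
```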
Now if you apply this, I can ask questions, and there are specific ones for quantum annealing and specific ones, of course, for gate-level universal quantum computers. Quantum annealing is at this moment of specific interest; it at least promises a solution for important optimization problems. Of course, it is based on a carefully selected physical theorem, the adiabatic theorem, originated by Born and Fock in 1928; what has been proven is approximate, it is clear, so there are terms that can be discussed, but if it is finally realized, they may be able to do what, for example, the Volkswagen people claim: to simulate and optimize the entire traffic of all the taxis in Beijing, a statement they made about a year ago. Of course, here you learn one thing: it is always a specific hybrid quantum-classical arrangement. If you think about computing with a quantum computer, you have to embed it in your HPC environment, you have to prepare what you want to calculate, and then you could use the quantum computer in a modular way to do the type of calculation where it is time-critical. So if you look at D-Wave you'll see a kind of Moore's law of their own, and if you look closely, they have mastered many problems.
One really has to recognize that. One of their achievements is that they have the control electronics on the chip; I don't know if the others have that, there is a lot of secrecy here, but they have it, and they publish openly about what they actually do. They now have systems with more than two thousand qubits. It is also interesting that they have become more universal: for each of those qubits they can now do an individual annealing path. So there are two directions of approach: one is that you start from a very universal computer and then have a tedious way of scaling it up, and the other is that you start from the scalable chip and then have a very difficult way of making it more universal, if you can do it at all. By the way, that's a good question, because the quantum annealing approach has to end in what is classically the problem Hamiltonian, because this represents what you want to solve; that is finally a diagonal Hamiltonian, and you start from one that is quantum; that's the trick.
You have to add, let's say, a transverse field that you start from, and then you decrease this transverse field, you go through a phase of quantum fluctuations, and finally you end up with the classical Hamiltonian. So much for the theory; now to the potential applications that could be addressed with this machine. Some of the applications that you would like to run, for example quantum simulations with negative-sign problems, cannot be done with this machine, because these Hamiltonians are what is called stoquastic. You can look it up; it's a real word, maybe some of you have never heard it.
It's a stoquastic Hamiltonian, but to tackle, say, negative-sign problems, at least for simple cases, you would need a non-stoquastic Hamiltonian. OK, but how do they do the annealing? They have these two schedule functions A(t) and B(t), and they are manipulated as follows; now I have the same problem as the speaker before, translating slides from a Mac to a Windows machine, which destroys all your formulas, but you can imagine that at the beginning one of those functions is much bigger than one and the other much smaller than one, and over the annealing time you trade the transverse field against the problem Hamiltonian.
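In standard notation, the schedule he is describing looks roughly as follows; this is a generic textbook form whose conventions vary between vendors, not D-Wave's exact specification:

```latex
H(t) = A(t)\sum_i \sigma_x^{(i)}
     + B(t)\Big(\sum_i h_i\,\sigma_z^{(i)}
     + \sum_{i<j} J_{ij}\,\sigma_z^{(i)}\sigma_z^{(j)}\Big),
\qquad A(0) \gg B(0),\quad A(t_f) \approx 0
```

The second bracket is the diagonal problem Hamiltonian whose ground state encodes the answer; the transverse-field term with A(t) supplies the quantum fluctuations that are gradually switched off.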
In this process you can expect that, if the energy gap Delta between the ground state and the excited states is small, you will not stay in the ground state; you will reach excited states, and the question is which excited states those are. The interesting thing is that there is an exact formula to predict the probability that you get it right, that you end up in an excited state or stay in the ground state, and that is the Landau-Zener formula; this is exactly how we benchmark, and I promise you can compare the machine against the Landau-Zener formula. The formula has this Delta squared in the exponent of the exponential function, together with a kind of velocity, the inverse of the sweep time, and exactly this makes it difficult: the gap enters in the numerator of the exponent and the sweep velocity in the denominator, so for a small gap you get essentially zero probability of staying in the ground state. But you can use this to determine whether the machine follows the Landau-Zener formula, and if it does, you can actually claim that there is a quantum effect, because thermal effects do not follow this formula; that's an important point here. So if you can show that, you know it's quantum behavior, and that is our testing strategy: we solve a small but very difficult problem, characterized by a non-unique, highly degenerate ground state with the first excited state close by, and it is placed on the Chimera graph directly; we do not use logical qubits. This makes it harder, because some of the qubits do not work, which is a manufacturing problem, at least for the machines we had until now; the newer ones are much better.
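The formula he is referring to, in one common convention (prefactors differ between textbooks): for a sweep at speed v through an avoided crossing with minimum gap Delta, the probability of being excited out of the ground state is

```latex
P_{\mathrm{excite}} = \exp\!\left(-\frac{\pi\,\Delta^{2}}{2\hbar\,v}\right),
\qquad v = \left|\tfrac{d}{dt}\big(E_1(t)-E_0(t)\big)\right|
```

so a small gap or a fast sweep makes it nearly impossible to stay in the ground state, which is exactly the Delta-squared-in-the-exponent behavior used for the benchmark.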
Now we compare whatever we get from the machine against the ideal simulation; ideal simply means a zero-temperature simulation, always zero temperature. We are now trying to introduce a temperature into the simulation as well, to fully model this machine, but right now we only have the ideal case. We applied this to that problem, which is a logical problem, and we can map it to the Ising model, which is what was done here. I won't explain it in detail, but believe me, you can have all those clauses mapped onto the Ising model, and if you do that, we can finally express all the coupling coefficients of the Ising model; that's what we do.
The result is Landau-Zener behavior. If you look at those curves, you might think agreement is automatic, but that is exactly the feature here. The point is that we want to cover a range of gaps Delta, so we have to use different subproblems; what you see here are all different 2-SAT problems. The black curve and the green curve are simulated; the other curves are measured on the device at different temperatures, but they essentially follow the Landau-Zener formula at these different temperatures, and this means for us that we see here a first indication that this machine is actually doing quantum tunneling, that it is performing a quantum process, for this specific problem.
I don't claim it for every problem, but it holds for this specific, very difficult problem, and that is why we think we have a quantum computer here. Now let's go to the gate-level machines: universal quantum computing without error correction, of course, and there are several, from Google and others. I attended the quantum conference in Munich last week, and this was very interesting; one of those slides that I captured on my phone was a slide on how to solve hydrogen, to do a quantum simulation of hydrogen and predict the dissociation energy; the 2015 paper was already presented there. So these are the prospects of gate-level quantum computing without error correction, and that is very intriguing and interesting, just as in the field of quantum annealing. So again, this is something we have to follow and want to include in our activities. So what do we do?
We try to help develop those machines. For example, the optimization of the gate pulses can be done through simulation, and it can help you, first of all, to get all the physical parameters right, and to make a rotation actually a rotation and not something slightly different from a rotation, as you see on the left side. All of these elements can be addressed through simulation in a co-design process, and perhaps we will do this with one of the companies, maybe IBM; there are still many errors in the unitary evolution. Now, comparing this, for example, with IBM's Quantum Experience: you can't do as much, because you have a very small system there, while with a larger system we could go further. But right now we can, for example, run a two-plus-two qubit adder, this modulo-4 addition module here, and of course the question is how you track the result of those manipulations. The result comes as frequencies of measured outcomes, so if the highest frequency meets your expectation, you'd say okay, they got it, and if the highest frequency is wrong, then you'd say they didn't get it; that is just our rule for now, but it doesn't mean the probability of getting the right answer is very high. What you actually see, as in all those two-qubit manipulations here, is that a probability that should be one is, say, 0.275; on the right side the correct result still comes out on top, but there are also wrong results, in fact many wrong results along with some right ones, and the probabilities may not meet your expectations. So look at the other examples here.
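A toy illustration of that acceptance rule, with made-up counts rather than data from the talk: run the circuit many times, histogram the measured bitstrings, and call the run correct only if the most frequent outcome is the expected one.

```python
from collections import Counter

# Hypothetical measurement records from 1000 repetitions of a small
# adder circuit; "100" (binary 4) is the expected readout.
shots = ["100"] * 275 + ["000"] * 250 + ["110"] * 245 + ["010"] * 230

counts = Counter(shots)
best, freq = counts.most_common(1)[0]
print(counts)
print("accepted" if best == "100" else "rejected",
      f"(top outcome {best} with relative frequency {freq / len(shots):.3f})")
```

Here the run is accepted even though the "correct" outcome appears with probability only 0.275, which is exactly the situation described above.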
I have many more; I can't show them all because we don't have the time. What is the summary here? Very simple algorithms: identity operations, two-plus-two qubit adders, measurements of a singlet state; error correction has been used to test this. With the Quantum Experience, in some cases we were able to observe qualitative agreement with quantum theory for small qubit systems; errors cannot be identified by users and cannot be attributed to specific gates, as there are large differences between calibrations. Of course, we are very interested in working on a larger system, and what we've heard is promising, that the fidelity is much higher on the newer qubits. Now, if you want to hear more about this, a paper that has now been submitted for publication can be found on the arXiv; it's called "Benchmarking gate-based quantum computers". You will see that testing all these technologies is quite a long exercise, and the article is a bit tedious
to read, but of course we wanted to be very careful in presenting our results. We now hope that this Google vision of the quantum chemistry roadmap will overcome the difficulties, but at this conference in Munich they did not show us the last two slides. Now, what would it mean to have, let's say, quantum computing in a supercomputing centre today? I think we certainly need a user facility for quantum computers, or a federation of such facilities, to bring us into contact with these technologies very soon. The first thing we have done is to try to translate technology readiness levels to the quantum realm, and if we do that, we would put experimental gate-based devices maybe between two and four, but four has not yet been reached; IBM and Google could be between four and bordering on six, but that is not clear either, and it differs between technologies. We would place the quantum annealing device higher; of course, there is the issue of error correction, on which not much is published, and many people believe that it is not necessary at this stage. Quantum annealers are further along: you can have one today, you can buy it.
We can do optimization, and many optimizations have already been done. What should we do next? After establishing a quantum computer user facility, we have to provide multiple computing devices, not just one or two; we have to go for all of them, and there must be competition for users in science and industry. Here in Europe we create a maturity ramp of systems that are hosted and operated, analog and digital, that exploit quantum phenomena. We want to host and operate multi-qubit devices for quantum annealing, because that is practical quantum computing today, and we want to give access to experimental gate-based devices; that is what we really have in mind. From the user's perspective this must flow through cloud access: you have a uniform development platform, a platform-as-a-service technology, and then there are the supercomputing systems with all those modules available, with quantum devices as further modules; maybe in the future we will also have a neuromorphic module included. OK, thank you very much. Are there any questions for Thomas as we get ready
to change over here? Any questions? Fine, then we will see you in the panel part; thank you. So next we have, yes, the clicker, very important, Karlheinz Meier, professor at the University of Heidelberg and also a director of the Human Brain Project of the European Union. Thanks, Karlheinz. Well, good evening; this is the last talk, and we hope to have some time for the panel. There are many ways to represent Moore's Law, in particular the fact that something is happening, that there is a change, and the plot that I have chosen here shows the number of transistors you can buy for a certain amount of money, say one dollar. What you get today is approximately 20 million transistors, and what you see is that over the years this number increased rapidly and now it stays stable, and it is clear that it will eventually fall. That is bad news for systems that are based on conventional CMOS transistors, because it means that if you want ten times more transistors than you have had in your cell phone for the last 10 years, you will have to pay a price that is ten times higher, and that is clearly unacceptable. So now people are seriously looking for new architectures, and that is a new development, because 10 years ago, when you came up with your own architecture, the argument against it was that people said: well, there is Moore's Law, and in two or three years normal computers, which always improve, will outperform the system you have built with so much effort, so it is not worth it. Now that has changed dramatically; now everyone is looking for special architectures.
The good and important thing is that you can first try to make them from conventional CMOS transistors; you can even use quite old-fashioned technology to demonstrate these architectures, and then of course that paves the way for new devices, as was so beautifully outlined in the first talk. Now, one of the new architecture families is neuromorphic computing; actually, there is neuromorphic, brain-inspired and brain-derived computing, and all these three things are slightly different. I won't explain the differences here, but they have one thing in common. Are they copying brains, are they building artificial brains? Actually, no. I think what we're trying to do is transfer what we call structure and function aspects, just structure and function aspects, from biological circuits to electronic substrates. And what do I mean by structure?
Well, we know that the brain is made of cells, connections and a network, so that is the static structure, but time is a very important variable: this structure changes as it evolves over time. There is local processing in cells, in neurons for example, and there is communication, and probably the most important thing the brain can do is learn, adapt to incoming data, and this is clearly something we need to emulate. Now, what are the goals? There are exactly two goals. One is to help neuroscience to better understand the brain; that is computing for neuroscience. The other, of course, is to use the principles of brain computing to do computing outside of neuroscience, data analytics and cognitive computing for example; that's the brain for computing. And what we want to capture is what the brain does very well: its energy efficiency, its speed, its robustness against failures, very important when it comes to new devices, and, again, the capacity to learn, which I consider the most important. So how do you make a neuromorphic computer? Three examples, in very simple words. What is always done is to produce many of the same, almost the same, elements; this is what the brain has, it is a massively parallel system. Currently there are three different approaches: there is the approach of using millions of classical processors, there is the approach of using millions of custom digital circuits, where the digital circuits represent biological circuits, and the third approach is to use millions of mixed-signal
circuits that are partly analog, partly digital, and all three approaches are being followed:
They are based on spike communication, these action potentials that are also seen in biological circuits and others comment on that later and the spinnaker can send six million spikes per second over each of the six bidirectional links, so In effect, what the spinnaker is is a program-controlled simulator in real time, so it is based on little for normal computers, hundreds of thousands of them that communicate was a network that is quite modeled after biological systems , in that sense it is also a neuromorphic system, IBM's to norv system is now definitely a machine that is not a phenomenon, it is no longer a processor with a separate processing element and memory, it cannot be programmed, it can only be configured in a sink, there is a difference and it has a large number of neurons and these neurons are wired, there are a million of them, so there are really special circuits that do nothing like being and you and there are other circuits like 256 million, it is say 256 for every neuron that synapses mimic and those are only 1 bit enough connections so I'm not very biological but it's certainly a network structure and these senators are static by the way so they can't learn that you have to configure them and from then on they are static, but now it is the most complex artificial neural network system in the world, it is based on real circuits that imitate biology, but these circuits are extremely simple and cannot learn, finally there is the physical model system which is what we do in Heidelberg and goes together with other groups in the human brain projects and why do we call it physical model?
Well, it's really a bit like these quantum machines, well, you say I don't calculate the Schrodinger equation, but I built a model that could imitate it and what we do is we have circuits that imitate the movement of the charges in the cell, you know. that there are ions. moving through ion channels is for the cell membrane and we move charges through transistors, so it's really a physical model, so it uses local analog computing, but like the brain, it does binary communication in continuous time like a small technical detail, it is implemented in couple forms of scale integration. of hundreds of thousands of years 50 million synapses in entire wafers and very important input there are several mechanisms of plasticity and learning implemented in the circuit and a very special master runs 10,000 times faster than in real time I will comment on that now the important thing is that these They're not just chips or systems that sit in the lab, but we really put an effort into the human brain project to make this look like real computers are fine, like the things you have at home or you have your supercomputing center. , so there are these two systems that are now somewhere in Europe, well, one in Manchester, one in Heidelberg, on the left you see a lot of spinnaker machine calls, half a million calls on the right, you see the physical model of brain scales of the machine, 24 modules, only briefly describes them and what is important.
The point is that these machines are part of the platform concept of the human brain. What is the human brain project? The goal is to make ICT tools available for brain science and there are many such tools and one of those tools is neuromorphic computers and those machines. They are accessible through a joint platform in the human brain project, you can log in to those machines and you can run experiments and work on them if you are interested, send me an email and we will configure you to what extent. Are we here with neuromorphic computing? Well, if your goal is to emulate brain circuits, that means producing spikes, producing action potentials, emulating neural cells, emulating plasticity, you can do it on a supercomputer, you can do it on your own morphic circuits.
Of course, you can do it in your own brain, which is the best of all solutions, and if you ask a question, you can ask us how expensive it is in terms of energy, for example, and if you measure energy in joules, there are numbers incredibly small in our brain actually costs $0.10 per joule for a synaptic transmission that is 10 to minus 14 cans of joules, it doesn't make sense. I mean, how much is it? It's very difficult to interpret, but it's probably interesting if you compare it to supercomputer simulations where, depending on the detail of the model, it's between 1 and 10 to the minus 4 joules or 10 to the minus 3 here and that means there's between 10 and 14 orders of magnitude between a simulation and a numerical simulation and our real brain now in the middle, but of course this is a logarithmic scale is neuromorphic computing that generally, because there are variations by a factor of 10 to 100, but generally places at 10 to the power of minus 10, that is the factor 10,000 of distance from the brain, but a huge factor more than immersion away from the latest technology. supercomputers, so this is really a way of building neural circuits and running them with a reasonable amount of energy.
Now the most important thing is time, and what I show here is what I call "time for a day". What do I mean by that? What is the time you need to simulate one day of activity in the brain? How long do you need in your own brain? A day, of course; your brain runs in real time, a day for a day. The supercomputer simulations that people run today run a thousand times slower, so you need a thousand days for a day, which may be fine if you only simulate a couple of milliseconds, hundreds of milliseconds, maybe minutes, but when it comes to days and years, this becomes problematic, okay?
So, to our neuromorphic machines again; I feel there's an interesting niche there. Some of them run in real time, like the SpiNNaker system, so a day takes a day, and that is very good, for example, if you want to study applications. And there is also this special BrainScaleS machine where a day takes ten seconds, and that, of course, is extremely important when it comes to learning and development. Why is time so important, and why do I think time is, in the end, more important than energy? Well, it's because time is the variable that describes all the changes in the brain, starting with the detection of causality at the synapses and going all the way to synaptic plasticity, development, learning, maybe even evolution, I don't know. In nature, in real time, this starts at 10 to the minus 4 seconds, 100 microseconds, and development and learning take days and years. So real time means that if you want to study development, you'll probably have to run your program on a supercomputer for years, maybe thousands of years, which would be silly; it won't be accessible.
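The "time for a day" numbers, spelled out; the acceleration factors are the ones quoted in the talk:

```python
SECONDS_PER_DAY = 86_400

# Wall-clock time to emulate one day of biological activity, given each
# system's acceleration factor (>1 is faster than biology, <1 slower).
systems = {
    "brain (real time)": 1.0,
    "supercomputer simulation": 1e-3,   # ~1000x slower than biology
    "SpiNNaker (real time)": 1.0,
    "BrainScaleS (accelerated)": 1e4,   # ~10,000x faster than biology
}
for name, speedup in systems.items():
    wall_seconds = SECONDS_PER_DAY / speedup
    print(f"{name:>26}: {wall_seconds / SECONDS_PER_DAY:g} days "
          f"({wall_seconds:,.0f} s)")
# BrainScaleS: 86400 / 10000 = 8.64 s, roughly the ten seconds quoted
```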
The accelerated model, for example, gives you access to this, and again I think learning and development are the key to making use of artificial systems. Now, before we close, because this is a panel, as has been said many times, I would like to raise some questions, some research questions, because are we done with neuromorphic computing? No, we are not; there are many questions that we have to address, and many of them are being addressed in the Human Brain Project. The areas are: what is the role of spikes, how does learning work and how can it be implemented, what are good neuron models and how should we simulate them, and what about devices and their variability? I labeled some of these in red here because I think they are the hot topics.

On spikes: are spikes just a possibility for saving energy, or are there other arguments, spikes for temporal coding, for learning with time, or stochasticity for sampling machines? I guess those are all useful applications of spikes. More important is learning: we have to get to continuous learning. Current deep learning applications carefully separate a training phase and a use phase; this is not what we do in our brain, which learns continually, and I think we need theories that will allow us to do the same in our artificial systems. Do we have to watch 50 million movies to learn what a cat is, like what Google does sometimes with deep learning applications? No; sometimes we can learn from a single event, so one-shot learning, or learning with very small labeled data sets, is another important topic. Another difference from deep learning applications: we do not learn by having a technician adjust the network from outside; the learning mechanisms run on the network itself, so there are ideas for on-network learning, and for sequence learning, not only learning from static images but from sequences in time, and finally there is the role of accelerated machines. On neurons: what are good neuron models? Multiply-and-accumulate, as we have in most applications of artificial neural networks? Leaky integrate-and-fire point neurons, as in TrueNorth, which is a very, very simple neuron model? Or should the substructure of neurons be taken into account: there are compartments, there are dendrites, these dendrites are active and nonlinear, and there are many theories being developed right now that tell us that a single neuron can do very, very interesting pattern recognition using the internal structure of the neuron.
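For reference, the leaky integrate-and-fire model he mentions is about the simplest spiking neuron there is. A minimal sketch; the parameter values are illustrative textbook numbers, not those of any of the systems discussed:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-4, tau=0.02, v_rest=-65e-3,
               v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates the input current, and emits a spike on crossing threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau   # leak + drive
        if v >= v_thresh:                              # threshold crossing
            spike_times.append(step * dt)              # record the spike
            v = v_reset                                # reset the membrane
    return spike_times

current = np.full(10_000, 2e-9)   # one second of constant 2 nA drive
print(f"{len(lif_neuron(current))} spikes in one second")
```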
That's why I call this a hot topic too; I think this is a very important development. Finally, variability: fault tolerance, self-calibration, homeostasis; how can we make use of devices that are not all the same and that have high variability? This, of course, is the path toward nanodevices. Finally, very briefly, a word on the HBP: the systems that we now have, and the system that IBM has, were all conceived around 2005, that is, more than ten years ago, and I think, given the progress in technology and in neuroscience, we are now in a situation where we are ready to build the next generation, and in the HBP we are preparing it; there are next-generation developments both for the SpiNNaker work and for the BrainScaleS project.
I won't go into details now because there is no time left and we want the panel, but you can ask me. The important thing is that today we have functional prototypes, and by 2020 we will have operating systems, and the objective of these systems is to be cognitive learning machines. Thank you so much. Any questions for Karlheinz before we get into the panel section? Do not go yet; we have a lot of time before dinner, and I intend to make the most of the conference organizers' offer so that we have time for a real panel discussion here.
Yes, you can take a seat. If you have questions, raise your hand so I can call on you. You know the IBM machine — what good is it if it doesn't learn? Actually, what people do is train externally, and that's actually a very useful way of applying this machine: you have a software model of your digital circuit and you do the training on a conventional computer — you just simulate the IBM machine on a conventional computer — you get a good network with a good set of connections and parameters, and then you load it into the hardware, which then runs it very efficiently, very efficient in terms of energy and time. Why is that maybe a good idea?
I mean, the nice thing about this is that once you have a good network, you can produce thousands of chips just by uploading the working network, and then you can give it to users and you don't have to retrain; they really do separate training from use. By the way, that is not possible for analog machines, because if you trained the network in software and uploaded it to the analog machine, it might work a little, but not very well, because there is variability in analog electronics. So analog machines, like our own brain, have to learn continually, and that is a very fundamental and important difference.
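The train-in-software, deploy-to-hardware workflow described here can be sketched in a few lines. This is a toy stand-in, assuming a logistic-regression "network" and a hypothetical 4-bit weight quantization — it is not the actual IBM tool flow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters to be separated
x = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# "Training phase" on a conventional computer: plain gradient descent
# in floating point, standing in for simulating the chip in software
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    w -= 0.1 * (x.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

# "Deployment phase": freeze and quantize the weights to a few levels,
# mimicking the limited precision of a digital neuromorphic substrate
scale = np.abs(w).max() / 7            # assumed 4-bit signed weights: -7..7
w_q = np.round(w / scale).astype(int)

def chip_inference(sample):
    # On chip, only cheap fixed-weight accumulations remain
    return int(sample @ (w_q * scale) + b > 0)

acc = np.mean([chip_inference(s) == t for s, t in zip(x, y)])
print(f"quantized-deployment accuracy: {acc:.2%}")
```

The key property is that the frozen, quantized network can be copied onto thousands of identical digital chips; an analog substrate with device-to-device variability would not reproduce the trained behavior this way.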
I'm not saying one is better than the other, but they are fundamentally different. So please raise your hand if you want to ask questions — sir, there will be a question here. But while we wait for Thomas, I just wanted to ask the panelists a question before we go on. One thing that became clear from what Olga said is that it seems very difficult to make some of these new devices, even with multiple different types of beams. From what you said, Karlheinz, we are making great progress on neuromorphic, and yet we are still limited by the speed at which we understand what it means for the brain to compute — we have a lot under our belts, but this is one of the key areas of progress. And with quantum, there is the question of scaling entanglement. Do you believe that in the future — after Moore's law, after lithography — we can continue exponential improvements in any aspect of these machines, or will we settle into a new scaling regime? Of course, with quantum computing there are specific questions to ask and answer, so this won't be general-purpose computing, at least not as we know it today.
Quantum computers can only complement the overall computing landscape in some very difficult areas — computationally hard problems, some simulations that we simply have not learned to do on our digital machines — and in this sense it will clearly not be the solution to the end of Moore's Law. But on the other hand, if Moore's Law ends and new technologies come along, quantum computing also benefits from them, so I think that's a very good way to look at it. I think we have technologies, as in the graph that you showed, for maybe 10 years from now, but as we learn about the materials and start thinking about new architectures — not just planar transistors, but starting to make these stacks, you know, in-memory computing — I think we could start moving towards a new law that we can begin to develop.
Whether these technologies in general give us a new exponential — I think it may be possible. Well, some of you seem to imply that there are so many different theories or ideas about how the brain computes that we're not sure which one is correct. So how can you start building machines now? How can you commit to some kind of rapid development? Well, actually I think it's very important that we start doing it now, and the reason is that we have to learn how to build these massively parallel digital or mixed-signal systems. There is a lot to do, for example in terms of chip design: there are wonderful tools available for digital chip design and also for analog chip design, but there is very little for massively parallel mixed-signal systems, and the design tools — simulators, for example — all have to be developed, so we have to start and get going. The other thing is that we have to design a lot of configurability into our systems so that they are not too hardwired, so that as a user you have a lot of freedom to try different ideas, try out principles and test theories, and then see which one is good. That's the way we're going to progress; whether that's going to be exponential, I don't know.
Probably not that soon, but it's a very exciting path, because we're on the way to discovering the principles, and I'm very excited to be a part of this. Thanks. Sorry — a somewhat technical question, starting with Karlheinz but finishing with Thomas, so it is really for both of you. You showed this comparison in terms of energy: how much more efficient neuromorphic devices are compared to a supercomputer-based simulation. D-Wave made a similar argument a while back about how much faster their quantum optimizer is relative to a supercomputer running the same problem, and in the D-Wave case, Matthias Troyer showed that if you use better algorithms, you can solve the problem on a single GPU faster than on a D-Wave computer of that time. And his argument was — and that's why it's also a question for Thomas —
well, in that case, that the D-Wave optimizers scale with the same complexity as the classical simulation, so there is no advantage for D-Wave. In your case, how sure are you that the people on the simulation side have actually done their best and that it cannot be improved? In computer science we know we can sometimes improve by five or six orders of magnitude without changing the hardware. Yeah, it would be great if you could do that. I mean, I'm just referring to the published numbers, and I know that my colleagues in the HBP, but also outside the HBP, do brain simulations at realistic scale.
They deliver pretty consistent numbers for the speed of their algorithms, and that's all I know. They use state-of-the-art computers and software that I believe has been developed for quite some time and has been optimized for special processors like graphics cards. I am not saying an improvement is impossible to demonstrate; I would be more than happy if there were a huge improvement, particularly in time, perhaps five to six orders of magnitude — that would be very welcome. And if that were achieved, maybe in the end neuromorphic systems are of no use; that could be a very interesting result. At the moment, however, I don't see it, and as long as I don't see it, I think it's still worth continuing. So I think I have to answer the second part of the question — okay, of course, I have an answer here; you know where I'm coming from.
I am doing simulations in lattice gauge theory, and I can now look back at more than 20 or 25 years of activity. If I look back at 1986 or 1987, what I see is that since then there has, of course, been a tremendous increase in computing power — at that time some very prominent people proposed stopping everything until computers became, say, a factor of 10 million or a billion faster. In fact, during this time there was also algorithmic development: first came hybrid Monte Carlo, then the mathematicians came with many new preconditioners and solvers, and now with multi-scale algorithms. All of this has ultimately contributed an improvement factor of probably 10 to 100 million from the algorithmic side, so that people can now do simulations that one could never have contemplated before. This is algorithmic progress.
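For readers unfamiliar with the hybrid Monte Carlo method mentioned here, a minimal sketch follows — sampling a free one-dimensional scalar field rather than an actual lattice gauge theory, with illustrative step size and trajectory length:

```python
import numpy as np

rng = np.random.default_rng(2)

def action(phi):
    """Free scalar 'lattice' action: nearest-neighbor gradient plus mass term."""
    return 0.5 * np.sum((np.roll(phi, -1) - phi) ** 2) + 0.5 * np.sum(phi ** 2)

def grad(phi):
    """dS/dphi for the action above (periodic boundary conditions)."""
    return -(np.roll(phi, -1) + np.roll(phi, 1) - 2 * phi) + phi

def hmc_step(phi, n_md=20, eps=0.05):
    """One hybrid Monte Carlo trajectory: leapfrog integration of fictitious
    molecular dynamics, then a Metropolis accept/reject step."""
    p = rng.normal(size=phi.shape)
    h_old = 0.5 * np.sum(p ** 2) + action(phi)
    x = phi.copy()
    p = p - 0.5 * eps * grad(x)          # initial half kick
    for _ in range(n_md - 1):
        x += eps * p
        p -= eps * grad(x)
    x += eps * p
    p -= 0.5 * eps * grad(x)             # final half kick
    h_new = 0.5 * np.sum(p ** 2) + action(x)
    return x if rng.random() < np.exp(h_old - h_new) else phi

phi = np.zeros(32)
samples = []
for i in range(2000):
    phi = hmc_step(phi)
    if i > 500:                          # discard thermalization
        samples.append(np.mean(phi ** 2))
print(f"<phi^2> per site ~ {np.mean(samples):.3f}")
```

The algorithmic leaps referred to in the text — hybrid Monte Carlo itself, better solvers, multi-scale methods — all improve how efficiently such a Markov chain explores the configuration space, independently of any hardware speedup.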
Why should one think that algorithmic progress is limited to digital computers? Of course, if someone now solves a problem with a very new technology that is itself extremely complicated — and it must be said that you have to do a lot of manipulation to get anything out — you cannot expect that this will never be overtaken by someone who thinks a little about algorithms. There is no systematic way of improving algorithms; it often happens by chance. On the other hand, the same contingency applies if you have these quantum technologies, at least on the annealing side; it is easier to say this for gate-level algorithms, since essentially we only have a few of them, and some of those can be applied to optimization. So what I want to say is: we are now in a transition phase, and maybe in five years there will be a solution using a quantum optimizer that can no longer be surpassed by a new algorithm, because no one has any idea how to do it on a digital computer. On the other hand, I also agree with Karlheinz: the end of digital computing cannot be foreseen, because right now the problem is CMOS, and there may be other technologies. Nobody can say — maybe you can go up in frequency if you go to superconductivity; maybe, let's say, the effort that people are putting into quantum processors will finally pay off for digital electronics operating at 0.01 Kelvin, and then you can do operations at two or five terahertz.
I don't know how far they can really go, but if you can do that, this would be a tremendous improvement — a factor of a thousand in itself. That's what I think is cool. So, the microphone is over there; keep asking questions. One aspect of the human brain is that memory retrieval is always associative, and I think this is connected to the learning process — are these brain emulators in some sense associative memories? Well, you can configure them as associative memories; there is a group in the Human Brain Project that configures these architectures to work as associative memory. It really depends on how you configure the learning process, so yes, you can emulate that, and there is a group I will gladly connect you with if you are interested.
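The classic model of such an associative memory is a Hopfield network. The following minimal sketch (plain Python, with random binary patterns standing in for stored memories — not the Human Brain Project implementation) shows recall of a stored pattern from a corrupted probe:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian one-shot learning: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)
    return w / patterns.shape[0]

def recall(w, probe, steps=20):
    """Iterate the network dynamics until it settles near a stored pattern."""
    s = probe.copy().astype(float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0
    return s

rng = np.random.default_rng(1)
stored = rng.choice([-1, 1], size=(3, 64))      # three random 64-bit patterns
w = train_hopfield(stored)

noisy = stored[0].copy()
flip = rng.choice(64, size=10, replace=False)   # corrupt 10 of 64 bits
noisy[flip] *= -1

print("recovered:", np.array_equal(recall(w, noisy), stored[0]))
```

The essential behavior — a partial or noisy cue converging to the complete stored pattern — is exactly the kind of content-addressable retrieval the question is about.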
A question here. You know, the brain is good for some types of jobs, but for others it's not so good — we can't multiply floating-point numbers very efficiently, for example. So, beyond heuristics, is there a way to predict which classes of problems can be solved particularly well on neuromorphic devices, or solved using quantum computing? Can we develop a theory to better understand what type of computing will help us with what types of problems? Yes, I want to say that this is a very, very important point, and I would like to reinforce what was just said.
I mean, I made this claim that supercomputers are somehow super-slow brain emulators, and of course that's complete nonsense when it comes to computing as you know it — multiplying numbers, for example, where the exact opposite is true: if I ask you to multiply two numbers, it will take you about a minute, while a supercomputer does billions per second. So it is clear that arithmetic is not what the brain does, and the fact that brain simulations are so slow and so inefficient comes exactly from trying to map the functioning of a physical or biological system onto differential equations that we then solve numerically. But I want to say clearly that there are — I'm sure there are — problems that the brain is optimized for, and let me give you an example of what I firmly believe is perhaps, in the end, the most important one: the temporal analysis of sequences, together with their spatial extent — spatio-temporal pattern recognition.
What the brain is really good at is detecting irregularities and surprises. If you look at this room, almost nothing changes, and there is no need to do any processing; it is just someone moving their hand, and that is something I have to recognize and process. So I have to prepare the data in such a way that only the changes are transmitted to my brain; then I process these changes and use them to update the internal image that I already have in my brain of this room. So the analysis of temporal sequences — of temporal developments, in particular as related to making predictions — is very important for the brain, but it is also very important for applications, for example in cars, which do exactly that: look for changes and make predictions. And there are theories that actually try to do this in a brain-like fashion. Let me give you an example that may still be a little removed from the brain: there is Jeff Hawkins' group in the US, his company Numenta, and over the years they developed the hierarchical temporal memory model, which I think has a lot of similarity with cortical architectures and which is used to predict temporal sequences. To me, that is one of the most important applications I see for brain-like computing.
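The "transmit only the changes" idea can be sketched very simply. The following fragment uses synthetic pixel data and an arbitrary threshold, purely for illustration, to emit events only where the scene changed — in the spirit of event-based vision sensors:

```python
import numpy as np

def event_stream(frames, threshold=0.1):
    """Emit (time, pixel, sign) events only where a pixel changed by more
    than `threshold` since its last event -- a sparse, change-driven
    representation instead of full frames."""
    reference = frames[0].copy()
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        delta = frame - reference
        changed = np.abs(delta) > threshold
        for idx in np.flatnonzero(changed):
            events.append((t, idx, int(np.sign(delta[idx]))))
        reference[changed] = frame[changed]   # update internal model locally
    return events

# A static "room" of 10,000 pixels with one small moving patch:
rng = np.random.default_rng(3)
frames = np.tile(rng.random(10_000), (50, 1))
for t in range(50):
    frames[t, 100 + t : 110 + t] += 0.5      # the moving hand

ev = event_stream(frames)
print(f"{len(ev)} events instead of {frames.size} pixel reads")
```

Because almost nothing in the scene changes, the event stream is orders of magnitude sparser than the raw frames — which is the efficiency argument the speaker makes for the brain.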
So this actually speaks to something I just want to put on the table — I think everyone here has realized this point, but the popular press, when they hear about the imminent end of Moore's law, says, "Well, then we'll do quantum or neuromorphic." To re-emphasize that point: does each of you believe that any of these technologies would subsume what digital computing currently does, or do they expand computing into areas where digital isn't as good? Okay — it's a similar example to the one just given: adding numbers, for example, is essentially, at least as far as I know, a deterministic process, and there may be no advantage.
Entanglement can't really be used for that — there are articles on this, and I'm not sure it's easy to conclude, but this is my knowledge of it — so we won't gain an advantage by doing digital computing on quantum computers in a simple way, because it is not possible. On the other hand, what we could anticipate is optimization — say, influencing the training of neural networks through better optimization, and so on — and there is quantum simulation. These are specific areas; they will not replace digital computing but will complement it. And furthermore, to do this you will need preparation of your computation, you will need post-processing of your computation, you will need a lot of HPC to include those quantum devices in HPC.
I just learned from Karlheinz that the same is true for neuromorphic computing, at least for some practical applications, and as far as I know it has been included in an HPC system to understand or learn how this can be done. So my prediction is that there will simply be a sort of hybrid or modular type of system, where the quantum element is called as a subroutine — by a function call, if you like — and then you get a reasonable response from that system and use it again on your HPC device. So we have to be, let's say, very creative to continue with digital computing.
I think that's exactly what we learned today: there is a part, let's say, that is covered by neuromorphic technologies and a part that could be covered by quantum technologies, but at the core we certainly have to further develop digital technologies. And furthermore — you've known for a long time that the chip manufacturers, the Intels of the world, have driven the chip market: this is what there is, and end users have been using it. I think a lot about technology in the context of the Internet of Things, where the end user specifies the need and you can then adapt the device and the chip architecture to solve that particular problem. And here is where, as you said, you have neuromorphic, you have quantum, and I think it's not just this one digital platform.
I think in the future there will be very different chip designs depending on whether you're, you know, the Googles or Nvidias of the world, which may have unique applications — basically, specific problems they're trying to solve well. And that leads to another question — by the way, is anyone else in the audience connected with a microphone? Okay, Thomas has a question, so I'm going to ask one and then give it back to you. Olga mentioned something that I want to pick up on: we have seen some of the great advances in quantum and neuromorphic, and even in digital CMOS, in companies that I do not consider traditional players in the market — we have Microsoft, we have Google. As Olga was saying, the way the chip industry has gone, it has focused a lot on horizontal integration: we use the same product to solve many different problems. Yet we are seeing the big advances in quantum and neuromorphic from companies that are very vertically integrated — Google, Microsoft — focused on one problem. Do you think the computing landscape will change?
How different do you think the computing landscape will be? Do you think the current players are ready for this change? I can only guess, but the reason why Google is investing so much, I would say, in knowledge about future computing is that Google is truly a data company, and there is probably no one else in the world who recognizes so clearly that understanding data is key to success in the future. If I'm a pure hardware manufacturer, I'm probably not so aware of it; but Google has to deal with these huge amounts of data.
Facebook is another example: you would say, well, these people make this simple website — why are they also interested in novel computing? Because they have to deal with these huge amounts of data, and somewhat out of desperation, I think, they invest in new computer architectures, and that seems quite remarkable to me. Oh yes, and with the Googles and Facebooks of the world, I think it's not just about the data either. I mean, you know, they have the servers, and power is a huge deal for them, so they design some kind of chip that would reduce the consumption of their data centers — they don't necessarily have to run HPC-type operations, but the energy cost of driving all that data is huge for them. I think these companies face very different needs that were simply never investigated before. But in general, we have seen a lot of changes in computer companies in the last 25 years, of course; some of them, let's say, survived, others emerged,
It is getting better and better and what they do is a kind of integration, perhaps through a subcompany, but as I integrate developments of this type to be ready at a given moment, the very big ones will also try to be theirs. own suppliers of whatever, even their own banks, so there are a lot of economic theories behind that. In this sense, I think maybe it will be the other way around, if you have sketches, it could even become more static and it will just be a miracle, yes, maybe the government will finally have to have a say in it because the economy must need fluctuations and the economy needs to have new, let's say, companies that are emerging that you have seen in the last hundred years, maybe you already see what is coming in the future, sir.
Thomas, it's probably Olga, but perhaps the other two gentlemen can comment. You know, on the one hand, we just discussed that there may be new solid state devices and you touch on it, but what you really talked about about these new ways of doing things and you have any idea that you can give us about the timelines that you see from these new ones is enough. techniques that come up to large scale production, because if you know that making something that is useful to the consumer really means that you have to have a production. to scale and you know, oh, it will be, you know, it will always be ten or fifteen years.
Ultimately, quantum computing may always be in the future. For digital things, you know, if you look at the small companies or startups, they are seeing that this is the direction to go. David Lam, who founded Lam Research and is one of the pioneers in lithography, has a new company that is really heading toward these multiple-beam systems. The current technology they're developing is basically a thousand-beam machine that would allow you to do these kinds of direct-write approaches with a thousand beams simultaneously. Of course, the throughput of that is not the throughput you would have at the level of a high-volume foundry, I think, but if you can demonstrate the potential that this is going to enable new technologies, the jump from a thousand to, say, a hundred thousand beams and beyond is not huge — you just run multiple parallel machines — and I don't think it's necessarily ten years away, because the startup is already making a thousand-beam device-processing technology now, and they can demonstrate it. And there are many others, not just multi-ion, looking to incorporate multi-beam technologies; Zeiss has a system
with, basically, a thousand electron beams. I think the hardest thing is going from a single device to a thousand; once you get to a thousand, the expansion becomes more reasonable. Okay, some final words — I think we're out of time right now, but any final comments from any of the panelists? I can only second what was just said: I think it is very important that this is done while, at the same time, people work on the architectures — that all these things are done in parallel — so that once the new devices are ready and can be produced in large quantities with a certain degree of reliability, those who built the architectures will be more than happy to integrate them.
I would say the same thing: this co-design framework — where you're not just making the material to make the device, but also figuring out the architecture that the device goes into, and then the software, how computation is done on these devices, all developed together — is key. Thomas, any last words? I think the last thoughts are on your side. Okay, I thank you all for staying late, but this is a great group of people, so thank you to our panelists.
