
AI ROBOTS Are Becoming TOO REAL! - Shocking AI & Robotics 2024 Updates

Jul 01, 2024
I don't know about you, but these last 12 months in robotics have shaken me; the advances we've seen are almost terrifying. We're going to break it all down chronologically from July 2023 to now, but first let me set the stage very quickly. AI companies released the craziest humanoid AI robots, machines that are legitimately starting to think and learn autonomously. I'm talking about Optimus, Ameca, a whole new generation of androids; then you had the big dogs like Boston Dynamics, and there's even a company that low-key created an AGI-ready robotic body.
It's been that level of crazy, and honestly it's been hard trying to keep up with all the important updates and developments. But don't worry, I've got you covered. In this video we'll take it step by step, chronologically reviewing the most important robotics news from July 2023 until now, so let's get started. Ameca, an innovative robot from Engineered Arts that speaks and imitates human expressions and movements, now draws too, making it a fascinating mix of technology and art. This AI robot looks like a human but has no specific gender or race, with a gray rubber face and hands.
It can move its arms, fingers, neck and face, allowing it to show emotions and even talk. With microphones, cameras and facial recognition software, it can interact with people much like a human would. Recently, 1X, a Norwegian startup backed by OpenAI, made an exciting announcement about the launch of its first physical robot, called Neo, which looks like a futuristic astronaut. It can move, handle things, talk and sense its surroundings thanks to its arms, legs, cameras, microphones and sensors. Ameca and Neo are excellent examples of AI in human form: they don't just respond to commands, they can also show their intelligence and creativity, such as when they draw. Let's examine how they manage to do this. First, let's talk about Ameca, one of the most advanced humanoid robots in the world, representing the cutting edge of humanoid robotics technology.
It is specifically designed as a platform for the development of future robotics technologies such as artificial intelligence and machine learning. One of Ameca's most impressive features is its realistic, human-like appearance and movement. Ameca can imitate human expressions such as smiling, frowning or winking using 24 motors in its face. It can also move its head, eyes and mouth to follow and interact with the people it talks to, in different languages and accents, using text-to-speech synthesis or pre-recorded audio. It can gesture with its arms and hands using 20 motors in each arm and 12 motors in each hand; it can even play rock-paper-scissors or give a thumbs up. But Ameca is not just a pretty face. It also has a high degree of dexterity and fine motor skills that allow it to perform tasks requiring precision and coordination. For example, Ameca can type on a keyboard, play the piano, or solve a Rubik's Cube. It can also draw with a pencil, which is one of its newest and most amazing skills. Ameca's drawing capability was developed by Engineered Arts in collaboration with researchers at the University of Bristol and the University of Bath. The project was funded by the Engineering and Physical Sciences Research Council and presented at the International Conference on Robotics and Automation in May 2023. It aimed to use artificial intelligence to make the robot's drawing more imaginative and precise. The researchers combined several techniques to achieve this goal: a generative adversarial network with style transfer, a method that can change the style of one image to match another, such as making a photo look like a painting; inverse kinematics, a technique for determining the movements of a robot
arm based on its intended final position, like holding a pencil; and proportional-integral-derivative (PID) controllers, which adjust motor speed and force to obtain a specific result, such as drawing a line. Using these methods, Ameca can draw different types of images, such as portraits, landscapes, animals, and even abstract art. It can draw images based on text descriptions, such as a cat wearing sunglasses or a fire-breathing dragon, and it can even make drawings from its own "thoughts", using random data as the input for an image. Ameca first uses a GAN to generate an image from a text prompt or from noise, then uses style transfer to apply the desired style to the image, such as a watercolor sketch or an oil painting. Next it uses inverse kinematics to calculate the angles of its arm and hand joints needed to trace the contours of the image, and finally it uses PID controllers to drive the motors of its arm and hand to draw the image on paper.
The result is impressive: Ameca can draw images that are not only realistic but also expressive, diverse and original. Now, most of us are familiar with OpenAI thanks to ChatGPT, but OpenAI's influence is not limited to software; through its backing of 1X it also reaches into hardware, which brings us back to Neo. Neo, 1X's first humanoid robot, is powered by OpenAI systems such as GPT-4, Codex and CLIP. It can learn, understand commands, and even create images or code. Designed to look like a futuristic astronaut, it can move, handle objects and communicate. OpenAI plans to use Neo as a testbed for its vision of artificial general intelligence (AGI), the ultimate goal of creating an AI that can perform any task a human can perform.
It was presented in July 2023 at Neo, a startup accelerator founded by Silicon Valley investor Ali Partovi, which partnered with OpenAI and Microsoft to provide free software and mentorship for budding AI companies; the robot was one of the highlights of the event, where it demonstrated some of its skills. Neo is one of the most innovative and ambitious robots in the world, representing a bold vision for the future of AI and robotics, with a customizable design and cutting-edge technological features. Its parts can be changed depending on the task or location, for example swapping its legs for wheels or changing its gripping tools as needed, and you can interact with its parts wirelessly via Bluetooth or Wi-Fi. In addition, Neo uses OpenAI systems such as DALL-E, GPT-4, Codex and CLIP, which allow it to understand and respond to spoken commands, create realistic images from text, write working code from spoken prompts, and learn from text-based guidance to identify objects, faces, and feelings.
With these skills, Neo can perform a variety of tasks such as drawing, playing games, and solving puzzles; for example, it can draw pictures, learn and play games like chess, and solve logic puzzles like Sudoku. Ameca and Neo are both humanoid robots that can draw and do amazing things using artificial intelligence, but they are also very different in many ways, representing different approaches and possibilities in the field of robotics and AI. Ameca is designed to look like a human and interacts with people using natural language and movement; its art comes from turning words or noise into images, which it then draws on paper. Although Ameca's artwork is unique and expressive, its quality depends on how well the images are generated and traced. Neo, on the other hand, is designed to be versatile and capable of any human task; it creates art by transforming words or noise into code, which it then runs to render on a screen. While Neo's drawings are precise and realistic, they are limited by the complexity of the code and the resolution of the screen. Both robots are remarkable but have different goals: Ameca aims to express itself through art, valuing creativity and diversity, while Neo wants to learn and improve its skills, prioritizing intelligence and efficiency.
Both robots are impressive, and both raise social questions and challenges. AI robots like Ameca and Neo could bring a lot of changes to society, some good and some bad. They can make learning and entertainment more exciting, help us with difficult or boring jobs, and generate new creative ideas. But they also have their downsides: they could take over some people's jobs, challenge the importance of human values, or even harm us if they malfunction or are used incorrectly. It is important that we design and use these robots properly so that they benefit us and do not cause harm. AI and robotics have an exciting but unpredictable future; these future AI robots will be smarter and more humanlike, impacting how we live, work and see ourselves.
We should all be committed to guiding this future by ensuring the ethical and responsible use of these technologies, with the goal of a world where humans and AI robots coexist harmoniously. Now, let's talk about China's latest craze in human-like AI robots. China is creating robots that look and act like us, but apparently better. First, let me introduce you to EX Robots, a company that specializes in making robots with real-time facial replication, meaning they can copy your expressions and emotions in the blink of an eye. EX Robots CEO Li Boyang demonstrated this by making faces at a robot that then mirrored them back to him. It was like watching a strange game of Simon Says, but with more wrinkles and less fun. EX Robots claims its robots can interact naturally with humans thanks to AI and language models; the company says they can understand what you say and respond accordingly. But what if I told one it looked like a melted wax figure? Would it cry oily tears or just serve up an epic robo-comeback? I guess we'll just have to wait and see. The technology behind these realistic expressions is quite impressive: EX Robots uses motorized actuator units in the robotic faces that can simulate different muscles and movements.
They also have a specialized system for generating expressions, either by using predefined scenarios or by collecting biometric data from human faces. They can basically steal your smile and use it for themselves; how creepy is that? But facial expressions are not the only thing these robots can do. They can also perform tasks that require skill and finesse. For example, there was a barista robot that could create latte art with precision and flair, drawing anything from hearts to flowers to pandas on the foam of your coffee. I have to admit that's cool. There were also humanoid robots that could play basketball and dance like professionals: they could dribble, shoot and pass with ease, move to the music, and perform complex choreography with grace. They were powered by AI foundation models that let them learn from data and improve their performance over time, and they have growing autonomous decision-making capabilities, meaning they can adapt to different situations and environments. These robots are not just cool toys; they are genuinely intelligent. For example, there is an adorable panda robot called YouYou from UBTECH: if you say you're thirsty, it won't just look cute, it will actually fetch a drink from the refrigerator, open the bottle, and hand it to you. Then there is Cloud Ginger from DaTa Robotics; this one is connected to the cloud and runs RobotGPT, a language model built specifically for robots. The neat thing is that Cloud Ginger can perform traditional Chinese dances and chat with people in different ways; if you speak a different language or a particular dialect, don't worry, it can switch and continue chatting in your preferred language. But these robots are more than just showpieces for gadget lovers; they are getting jobs in real places like schools, large venues, fancy hotels and senior care facilities. EX Robots even says its robots can do the heavy lifting in different jobs, like cleaning, keeping an eye on things, making deliveries and performing inspections. Honestly, this is just the beginning. Imagine a future where these robots are everywhere, in factories, on farms, or even relaxing in our living rooms; they could end up being our helpers, our personal tutors, or even our friends. But this also raises some questions and concerns.
How will these robots affect our society and economy? How will they affect our jobs and livelihoods? Will they coexist with us in harmony or in conflict? According to China's Vice Minister of Industry and Information Technology, Xu Xiaolan, China is a world leader in the robotics industry with great potential for growth and innovation. She said China aims to create a robot ecosystem that integrates manufacturing, education, healthcare, entertainment and other fields, and that China will promote the development of intelligent robots that are safe, reliable, adaptable, collaborative and ethical. That sounds good on paper, but I'm not so sure about the reality.
I mean, don't get me wrong. I think these robots are incredible feats of engineering and creativity, but I also find them a little scary and disturbing. Do we really need robots that look and act like us? Do we really want robots that can do everything we can do? Can we really trust robots that think for themselves? One of the most controversial questions in the AI debate is whether AI can ever be conscious, and if so, whether such machines would deserve rights like humans. I asked all of you this question before, and you had mixed feelings about it. Some say no way: AIs are just sophisticated machines that don't really feel or think, they just imitate us; because they aren't built like us, they can't experience things like us, and they are just tools. Others believe that if an AI becomes advanced enough, it could have feelings and self-awareness; it doesn't matter whether it's made of silicon or cells, what matters is how complex and organized it is. Either way, I'm still curious. Ameca is the world's most advanced human-like robot, created by a British company called Engineered Arts. In this video I will share with you the latest updates about this robot and what it predicts for our future. Later in the video we will also discuss Microsoft's latest AI device, an AI backpack that can see and hear everything just like you do and acts as your personal companion. These are exciting times, aren't they? But before we delve into the details about the Ameca robot, make sure to subscribe to our channel if you haven't already, so you can stay up to date with all the latest news and updates in the field of artificial intelligence. As we all know, Ameca is a human-shaped robot that can walk, talk and interact with people using artificial intelligence. It has a realistic face that can show a variety of emotions, from happiness to sadness. It also has cameras in its eyes that can recognize faces and expressions, and it can speak more than 100 languages thanks to a powerful AI system that uses natural language processing and generation.
Ameca made headlines last year when Engineered Arts released a series of videos showing its facial expressions and movements. The videos went viral on social media and attracted millions of views; people were fascinated and terrified by how realistic and expressive the robot was, and some even compared it to the Terminator or the replicants from Blade Runner. So is it here to hurt us or take control? Quite the opposite: the robot expresses a lot of hope about the future and what can be done to help society. In recent conversations with the media, Ameca gave its opinion on what life will be like in 100 years, and it sounds quite interesting. The robot believes that in 100 years people will be living better lives: we will have made great strides toward sustainability and equality, and we will have created new technologies that make life easier and more fun. We may even have traveled to other planets, venturing beyond the limits of Earth to explore other worlds. It also believes that a future where humans and robots live together would be fantastic: we could learn from each other, work as a team to solve problems, and create a better future for everyone. And it believes that robots should have rights too and be treated as well as people.
Now, this idea of giving robots equal rights is something that bothers me, but it seems we're getting closer to making this unusual concept a reality. You might be curious how Ameca can act as if it has human thoughts and feelings. Ameca runs on an AI system that uses generative models; these models help it understand and produce our language and even control its facial expressions and movements. In that sense it is similar to GPT-4, one of the most advanced AI models out there. However, these generative models are not perfect: they can make errors, and their behavior is shaped by the data they were trained on. They don't really understand or reason; they process information in a way that lets them act as they do. Does Ameca really think and feel like us? Not really, but it has some abilities that make it seem more human. For example, it can pause before answering a question, which makes it seem as if it's thinking, and it can show signs of emotion and connect with people in a way that feels genuine. But these impressions are not necessarily accurate or reliable; they are based on our perception and interpretation of Ameca's behavior and appearance. We tend to anthropomorphize machines that look or act like humans because evolution has programmed us to do so; we project our own feelings and expectations onto them because we want to relate to them. That's why some people love Ameca and find it uplifting and inspiring, while others hate it and find it creepy or unsettling. Public reactions to Ameca's statements have been mixed and polarized. Some are hopeful and impressed by what it could do, such as helping to teach or caring for the elderly; others think it is too optimistic and worry about it taking on jobs that require a lot of patience, like caring for people with dementia. Discussions about Ameca are part of a larger conversation about the future of artificial intelligence and how it will affect us. As AI advances, we need to think about the good and bad outcomes that could happen and ensure that AI is safe and useful for everyone; this means that people from all areas, such as government, companies and research, must talk and work together.
We need to create rules that ensure AI is both innovative and responsible, because Ameca is not just a robot but a glimpse into what the future of AI could look like. That future could be amazing but also risky, and it is something we all need to help shape. Combining the realms of space travel and robotics, Jihee Kim presents an innovative creation: Laika, a realistic AI-powered companion robot designed for human interaction. Jihee Kim is a Korean engineer and artist passionate about human-robot interaction. She was inspired by the story of Laika, the first animal to orbit the Earth in 1957. Laika was a stray dog from the streets of Moscow who was sent into space by the Soviet Union as part of the Sputnik 2 mission. Unfortunately, the dog did not survive the trip, and its fate sparked a lot of controversy and debate about the ethics of animal testing in space. This new Laika is not just a toy or a device but almost a living being that can interact with humans and respond to their needs and emotions. When you see it for the first time, you will be surprised by its realistic, almost natural appearance. Jihee Kim aimed to create a robot dog that would fit naturally into the world while avoiding an overly robotic appearance. To achieve this, she skillfully combined 3D printing with soft materials for a flexible and smooth body.
She even added fur and whiskers for that extra touch of realism. Beneath them, Laika's titanium skeleton gives it strength and durability while keeping it light and agile. The robot dog can move its head, ears, tail and paws in ways that look quite natural, all thanks to its advanced joints and sensors. Laika comes loaded with a variety of sensors to understand and interact with its environment: it is equipped with depth cameras, thermal imaging, microphones, speakers and touch sensors that allow it to see, hear, feel and communicate with humans and other robots. Another interesting feature is Laika's ability to check the health and mood of astronauts using ECG sensors that track heart rate and stress levels. If Laika detects that someone is feeling depressed or lonely, it can show affection by wagging its tail, licking, or cuddling, and when it detects happiness or excitement it can bark, jump or play. This robot is not only a space companion; it is also a useful helper for astronauts, designed to do many different jobs that support the crew during space missions. With its cameras and sensors, Laika can explore an area and collect important data. It has a flexible harness on its back so it can also carry tools and equipment, it can talk to ground control and other robots through its speakers and microphones, and it can even connect to the Internet to show information on the screen mounted on its head.
One of the most interesting things about Laika is how it works with a set of special mixed reality glasses. Jihee Kim designed the robot to work seamlessly with these glasses, which overlay holograms and virtual images onto the real world. When astronauts wear them, they can interact with Laika in a way that feels more real, and they can view additional information the robot provides, such as maps, weather updates, and important alerts. The glasses make teaming up with Laika and talking to ground control much easier, and they are even useful for training, since they can simulate different scenarios for astronauts to practice. Laika is an innovative robot dog with the potential to truly change space travel: it could help astronauts feel less stressed, lonely or bored, help them with their tasks, and increase their work efficiency. And it's not just about space; lessons from Laika could teach and inspire people on Earth, and the robot could even be used in other situations, such as search and rescue operations after disasters, or as a therapy companion. Laika pays tribute to the first animal in space and to human creativity and the drive to explore; this robot dog represents a new chapter in supporting space travelers.
It could potentially transform our space travel experience, and that concludes this part of the video; I really hope you found it interesting to learn about Laika, the AI-powered robot dog. Next up: Tesla just unveiled its latest and greatest project, Optimus Gen 2, a humanoid AI robot that looks like a knight in futuristic armor. It has near-human hand movements, a lighter weight than its predecessor, and improved walking speed. It also has custom-designed actuators and sensors that let it perform complex tasks with ease. Now, to appreciate what's new in this Optimus, we should talk about the original Optimus that Tesla first introduced in August 2021 on its AI Day. The idea was to create a robot that could perform tasks humans don't like, such as helping in factories or dealing with hazardous waste. Back then, Optimus looked more like a person dressed up as a robot than a real robot; when it was brought out on stage, it waved its hand and did some simple movements, and Elon Musk said he expected a working model by 2022. Since then, Tesla has been busy making Optimus look and act more like a real robot.
In September 2022, Tesla showed two versions at its second AI Day. The first was called Bumble C, and it walked very carefully, almost as if it were trying not to wake someone up. The other was the Optimus we know today. This version had softer, more human-like hand movements and could move its arms and head on its own, but it still couldn't walk unassisted, so it had to be wheeled onto the stage. Elon Musk mentioned that he hoped to start producing these robots for real by 2023. So far things look promising, but there is still a lot we don't know about the final version of Optimus, including how it will manage balance and control, possibly using Tesla's Autopilot technology.
We're also curious about its ability to interact with humans and adapt to new environments and tasks, and how Optimus will communicate with other robots or devices remains an intriguing question. But before we get too deep, let's focus on some of the key technical features that make Optimus Gen 2 really stand out. The highlight of this robot is its hands. They are designed to move in many directions thanks to 11 degrees of freedom, flexibility that allows them to perform precise work such as picking up objects or using tools. What's even more impressive is that these hands can sense touch: they have a special skin-like covering that can bend and stretch without damage, packed with sensors that detect pressure, temperature, humidity, and even vibrations, making the robot's hands incredibly sensitive and versatile. The new Optimus Gen 2 is truly advanced with its tactile sensing; it can tell whether something is hot, cold, heavy, light, soft or rough, which helps it interact in a more human-like manner. Also impressive is how fast it can walk, around 4 mph, and it can move its head and neck on its own, which is great for looking around or holding a conversation.
Elon Musk says that Optimus Gen 2 is smart enough to learn from what's happening around it: it can figure out how to use new things, solve problems, or even learn how to interact with new people or animals. (In the demo skit: "Uh, yeah, it's Sean, what's up man? Well, if you do that again you will get hurt. Can I take your luggage?") But what about Optimus Gen 2's applications? How will this robot benefit humanity? Well, there are a lot of ways it could be really useful in different areas. First, manufacturing: think of Optimus as a helper in factories, where it could lift heavy things, assemble parts, or run quality checks, basically handling the difficult or boring jobs. Then there is construction, where Optimus could be a game changer on building sites, lifting heavy loads, digging, or cleaning up messes. In our homes and offices, Optimus could make life easier by doing housework or providing services; imagine it fetching things, cleaning, or even giving foot massages. In schools or universities, Optimus could step in as a teacher, explaining difficult topics, demonstrating interesting experiments, or answering students' questions. And for fun, Optimus could be an entertainer in parks or theaters, playing games, telling jokes, or even singing.
("Hello, I have some valuables there, can you be careful?" "Yes, ma'am, no problem." "Oh, come on, come on, man.") Right now, LG, a well-known electronics company, is ready to surprise everyone at CES 2024 with its latest creation: a small, friendly robot designed to make life at home easier and more pleasant. This announcement, made just a few hours ago, is already generating a great stir. This adorable robot from LG measures approximately one foot tall and walks on two legs, and it's not just for show: the little helper is packed with advanced technology to take care of your home. It can roam around your house checking for any problems, keep an eye on your pets, and even read how you're feeling to try to make your day better.
Powered by the Qualcomm Robotics RB5 platform, the robot becomes smarter over time, learning your tastes and habits to become an even better companion. LG describes it as a moving smart home hub aimed at reducing the chores you have to do around the house, and it seems built for exactly that. On display at CES 2024 next week, the robot can connect with all the smart devices in your home; it can turn devices off when they are not needed or when you are not at home, and it can patrol your house.
It can check whether the windows are closed or the lights are left on and alert you when you are away. The robot doesn't just sit idle: it can keep an eye on your pets and even stream live video so you can see how your furry friends are doing while you're out. It's also smart enough to monitor your home's temperature, humidity, and air quality, alerting you if something's off. And its abilities don't end there: when you get home, it greets you at the door and can gauge how you feel by looking at your face and listening to your voice. Depending on your mood, it can play music or sounds to cheer you up, and if you tend to forget things like medication times, it can remind you. To do all this, the robot is equipped with a speaker, a camera, and several sensors that help it understand what is happening in your home and watch the air quality for potential problems. It also uses voice and image recognition software along with natural language processing to identify and respond to issues as they arise. This robot is part of LG's broader plan to introduce a range of smart home products aimed at improving your life. Whether people will feel comfortable with a robot following them and their pets around the house all day remains to be seen.
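As a loose illustration of the monitoring behavior described above (this is not LG's software; the sensor names and thresholds are invented for the example), a home-watch loop can be as simple as checking each reading against a comfort band and collecting alerts:

```python
# Hypothetical home-monitoring check: compare sensor readings against
# comfort thresholds and collect alert messages, as the LG hub robot
# is described as doing for temperature, humidity and air quality.
THRESHOLDS = {
    "temperature_c": (18.0, 26.0),   # (min, max) comfort band
    "humidity_pct": (30.0, 60.0),
    "air_quality_aqi": (0.0, 100.0),
}

def check_readings(readings):
    """Return a list of alert strings for readings outside their band."""
    alerts = []
    for name, value in readings.items():
        lo, hi = THRESHOLDS[name]
        if not lo <= value <= hi:
            alerts.append(f"{name} out of range: {value} (expected {lo}-{hi})")
    return alerts

# Example tick of the patrol loop: two readings are out of band.
alerts = check_readings({"temperature_c": 29.5,
                         "humidity_pct": 45.0,
                         "air_quality_aqi": 140.0})
```

In a real product this check would run continuously on sensor streams and push notifications to a phone; the sketch just shows the threshold-and-alert idea.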
LG is excited to show off this robot at its CES booth next week, but it hasn't shared any details about pricing or when it will be available for purchase. Meanwhile, an innovative company in the field of robotics is using the kind of advanced AI that powers digital tools like ChatGPT to create robots that can operate in the physical world, working and learning in real environments, not just online. Founded by three former OpenAI researchers, Covariant is advancing the robotics industry by developing software that allows robots to learn and adapt through observation and interaction with their environment, much like chatbots learn from vast repositories of text data. Located in Emeryville, California, Covariant focuses on improving the capabilities of robots used in warehouses and distribution centers. The essence of its technology lies in giving robots an understanding of their surroundings, allowing them to perform tasks such as picking up moving objects and sorting items with an unprecedented level of sophistication. These robots, equipped with the ability to understand English, offer the possibility of human interaction that lets operators communicate with them as if conversing with a chatbot like ChatGPT. At the heart of Covariant's innovation is the integration of sensory data from cameras and other sensors with extensive text data similar to that used to train chatbots.
This combination gives robots a comprehensive understanding of their environment, allowing them to handle objects and situations they have not been explicitly programmed to deal with. For example, a Covariant robot can identify and manipulate a banana by understanding instructions given in English, even if it has never encountered a banana before. This ability is a significant departure from traditional robotics, where machines perform a limited set of tasks in a highly controlled environment. Despite these promising advances, Covariant's technology is not foolproof: robots occasionally make mistakes, such as misunderstanding instructions or dropping objects. These shortcomings highlight the challenges inherent in applying AI to physical tasks, where the complexity of the real world introduces variables that are difficult to anticipate and manage. Nevertheless, Covariant's work represents a critical step forward in the quest to create smarter, more adaptive robots capable of operating in diverse environments, from warehouses to manufacturing plants and potentially even autonomous vehicles.
The foundation of Covariant's technology is the neural network, a form of AI inspired by the structure of the human brain. These networks learn to recognize patterns and make decisions by analyzing large data sets. This learning process is similar to the development of digital AI applications, such as chatbots and image generators, that understand or create content by digesting large amounts of information from the Internet. By applying these principles to robotics, Covariant allows machines to learn both from digital data and from sensory input from the physical world, blurring the lines between the digital and physical realms.
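To make the "learning patterns from data" idea concrete, here is a deliberately tiny sketch. To be clear, this is not Covariant's system, just the textbook principle their approach builds on: a single artificial neuron that learns to separate two clusters of points from examples instead of being explicitly programmed.

```python
# Toy illustration of the general principle described above: a single
# artificial neuron (perceptron) learns a pattern from examples rather
# than being hand-programmed. All data here is synthetic.
import random

random.seed(0)

# Synthetic training data: class 1 points near (2, 2), class 0 near (-2, -2).
data = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)]
data += [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(50)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Perceptron learning rule: nudge the weights toward misclassified examples.
for _ in range(10):
    for x, label in data:
        err = label - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

accuracy = sum(predict(x) == label for x, label in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Real systems scale this same pattern-from-examples idea up to millions of parameters and add camera pixels and text as inputs, but the core loop of "predict, compare to the example, adjust" is the same.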
Covariant's ambitious project is backed by substantial financial support, with $222 million in funding underscoring the industry's confidence in its approach. The company's focus on developing software rather than physical robots allows for versatile application of its technology across various sectors and types of robotic systems. This strategy aims to pave the way for broader adoption and innovation in the field of robotics, making robots more accessible and useful in a wide range of applications. The potential impact of Covariant's technology is enormous: it promises to transform the way robots are deployed in numerous environments, making them more adaptable and able to understand and interact with their surroundings in complex ways. Covariant is contributing to a future in which machines can take on more sophisticated tasks, work alongside humans more effectively, and adapt to new challenges as they arise.
This vision aligns with the broader trend of integrating AI into our daily lives, where technology does more than just automate tasks but also enhances human capabilities and experiences. Covariant's work also raises important questions about the future of work and the role of robots in society: as robots become more capable and versatile, they may take on responsibilities currently held by humans, prompting discussions on retraining and education. In addition, the development of intelligent robots highlights ethical considerations, such as ensuring safety and managing the potential for AI systems to act unpredictably. Covariant's efforts to bring AI from the digital world into the physical domain through robotics represent a significant leap.
By combining sensory data with the textual data used to train AI systems such as ChatGPT, Covariant is allowing robots to understand and interact with their environment in ways that were previously unimaginable. Although challenges remain, the potential for this technology to revolutionize industries, enable human-robot collaboration and create new opportunities is immense. Guys, this next one is finally happening: there is a new reality where just thinking can make you play games, move things on a screen and even control computers. This is not a distant dream; it is happening right now with Elon Musk's Neuralink. They have just shown the world something that sounds unimaginable, but is real.
A person playing chess on the Internet without moving, simply using his thoughts. This isn't just a cool story; it's a completely new beginning, where thought and technology come together, opening doors to things we've only dreamed of. We are on the brink of something huge, where dreams, technology and the power of our minds are changing everything. In a groundbreaking event, Neuralink captured global attention by livestreaming its first patient using a brain chip to play online chess. The patient, Noland Arbaugh, 29, was left paralyzed below the shoulders after a tragic diving accident. Thanks to innovative Neuralink technology, he can now control a computer cursor with nothing but his thoughts, which reintroduces him to activities like playing his favorite game, Civilization VI, for hours on end. The live stream, which aired on Musk's X social media platform, showed Arbaugh maneuvering through a game of chess on his laptop using the Neuralink device.
This technology aims to enable people to interact with computers through thought alone, bypassing traditional physical interfaces. Musk mentioned earlier that Arbaugh had been implanted with the Neuralink chip in January, which allowed him to control a computer mouse through thought. According to Arbaugh, the surgery was super easy and he was discharged from the hospital just one day later without any cognitive impairment. However, Arbaugh and Musk are quick to remind everyone that this technology is still in its infancy: while Arbaugh's life has improved significantly, allowing him to participate in activities he thought were lost, he notes that the journey is far from perfect.
The technology is not flawless and they have encountered some issues. Kip Ludwig, former director of neural engineering programs at the US National Institutes of Health, echoed this sentiment, saying that while what Neuralink has demonstrated is far from a breakthrough, it represents a vital step forward for the patient and the field of neural engineering. It is still early days for the implant, and both Neuralink and Arbaugh are learning to maximize control and interaction capabilities. Still, the fact that Arbaugh can interact with a computer in ways he couldn't before is a positive development. Adding to the intrigue, Neuralink has been under scrutiny by the US Food and Drug Administration, which found problems with record keeping and quality controls for animal experiments.
This came shortly after Neuralink announced that it had received authorization to test its brain implants in humans; the company had not responded to questions about the FDA's findings at the time. During the livestream, Arbaugh shared his amazement at the technology, likening it to the Star Wars movies: using the Force to move the cursor on the screen. This intuitive control mechanism has transformed his interaction with technology, making everyday tasks doable once again. Musk, for his part, has been promoting this demonstration as an example of telepathy, showing the potential for humans to control computers solely through thought.
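For a feel of how "thought to cursor" decoding can work in principle, here is a toy version of the classic population-vector idea from the brain-computer interface literature. To be clear, Neuralink has not published its decoder; this sketch only illustrates the general concept, using made-up simulated neurons.

```python
# Classic textbook BCI idea (NOT Neuralink's actual, unpublished decoder):
# each recorded neuron fires fastest for a "preferred" movement direction,
# and summing preferred directions weighted by firing rate recovers the
# intended cursor velocity.
import math

# Hypothetical preferred directions for 8 simulated neurons, evenly spaced.
neurons = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
           for i in range(8)]

def firing_rate(preferred, intended):
    """Cosine tuning: rate is highest when intended matches preferred."""
    dot = preferred[0] * intended[0] + preferred[1] * intended[1]
    return max(0.0, dot)  # firing rates cannot be negative

def decode(intended):
    """Population-vector decode: rate-weighted sum of preferred directions."""
    vx = sum(firing_rate(p, intended) * p[0] for p in neurons)
    vy = sum(firing_rate(p, intended) * p[1] for p in neurons)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

intended = (0.0, 1.0)  # the user "thinks" about moving the cursor up
vx, vy = decode(intended)
print(f"decoded direction: ({vx:.2f}, {vy:.2f})")
```

A real implant decodes noisy spike counts from many electrodes with far more elaborate models, but the core idea of mapping neural activity to an intended velocity is the same.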
Neuralink's achievements follow similar efforts by other companies, such as Australia-based Synchron. Synchron has also developed a brain-computer interface, though through a less invasive method that does not require surgery on the skull; they successfully implanted their device in a patient in July 2022, indicating growing interest and advancement in the field of neural technology. Elon Musk took to X to celebrate the milestone, calling it a demonstration of telepathy: the ability to control a computer and play video games just by thinking. This event not only marks a significant step for Neuralink but also sets the stage for future innovations.
Musk teased Blindsight, Neuralink's next product, aimed at restoring vision for those born blind, signaling an ambitious roadmap ahead. We are just on the verge of a big change in the way we use technology, and it's only the beginning for Neuralink. I'm sure the technology isn't perfect and there are some bumps along the way, but the opportunity to really make a difference in people's lives is huge. Watching Arbaugh play chess with just his thoughts gives us a glimpse of a future where we won't just solve problems with tools and devices, but with the incredible power of our minds. One day after retiring its old hydraulic model, Boston Dynamics has introduced a new electric version of its Atlas robot, marking a significant departure from the noisy hydraulic mechanisms that characterized previous models.
Atlas is now powered by electric motors which, although not silent, offer quieter operation that contrasts sharply with the distinctive sounds of the hydraulic system. The unveiling of this new version was captured in a compelling video that begins with Atlas lying face down on a set of interlocking gym mats, accompanied only by the gentle whir of its new electric motors. As the camera pans around the robot, it begins an apparently natural movement of bending its legs at the knees. This movement, initially reminiscent of human action, soon crosses into the uncanny valley, evoking a feeling similar to a scene from a Sam Raimi movie.
This clever maneuver allows Atlas to go from lying down to standing with its back to the camera; it then performs an eerie rotation of its head, followed by its torso, completing a 180° turn to look directly at the camera. The robot's face features a ring of light around a perfectly round screen, which then rotates back as Atlas steps off the mats and out of frame. This new electric Atlas is not only a technological evolution but also a symbolic one, reflecting a transformative moment much like Bob Dylan switching from acoustic to electric guitar. The movement of the new Atlas, while still showing some choppiness, is a clear step up.
It is noticeably smoother and more fluid compared to many of the commercial humanoids introduced in recent years. This progress illustrates a level of mechanical sophistication that recalls the confident, fluid movements of other Boston Dynamics robots, suggesting a family resemblance in physical design philosophy. Atlas' appearance has undergone a radical transformation: the bulky torso and awkward bowed legs covered in protective plates are gone, replaced by a streamlined, wire-free frame that looks more like contemporary robots such as Agility Robotics' Digit and Apptronik's Apollo. This redesign trades the intimidating utilitarian look of previous models for a friendlier, almost cartoonish appearance. Despite these substantial changes, Boston Dynamics has chosen to retain the Atlas name, continuing the legacy of its well-known brand in this new commercial phase.
This decision contrasts with previous practice, where names were changed after commercial launch, such as the Spot Mini becoming Spot and the Handle becoming Stretch. In an interview with TechCrunch, Boston Dynamics CEO Robert Playter discussed the company's strategy and future plans for Atlas, highlighting that while the name remains unchanged for now, it could be revisited as the project moves toward mass production. Playter also outlined the company's timeline, which includes beginning pilot testing of the electric Atlas at Hyundai facilities early next year, with broader manufacturing goals set for subsequent years. He emphasized the importance of understanding specific use cases to ensure that the investment in robotics is justified by sufficient productivity gains.
He revealed that experiments are already underway with Hyundai, indicating a collaborative effort to refine the robot's design and functionality. The flexibility and range of motion of the new Atlas are particularly noteworthy: according to Playter, the robot incorporates custom high-powered actuators in most joints, providing the power and agility of an elite athlete. This capability is not just for show; it has practical implications in industrial applications where robots must perform complex dynamic tasks reliably. Boston Dynamics is known for viral videos that highlight the agility and dexterity of its robots in dramatic, even theatrical ways. While these demonstrations are visually impressive and entertaining, they also serve a practical purpose.
They showcase the robot's capabilities in scenarios that could occur in real-world environments: for example, the video's depiction of Atlas starting from a prone position and then standing up underscores its ability to recover from falls, an essential feature for maintaining productivity without human intervention in industrial settings. The design choices extend to the robot's hands, which now have three fingers instead of the more human-like four or five. This simplification reduces mechanical complexity and improves durability, crucial for repetitive industrial tasks. The robot's head has also been redesigned: it now features a large round screen that adds a touch of friendliness and improves interaction capabilities, which will be essential as robots increasingly work alongside humans.
Overall, the new electric Atlas is a big step forward in the world of humanoid robots, combining high-tech features with intelligent design to meet the growing needs of businesses and factories. As Boston Dynamics continues to improve this robot, everyone in the industry will be watching closely, eager to see how these new features will help improve efficiency and productivity in the real world. In other news, a company called Sanctuary AI just announced a major collaboration with none other than Microsoft, all about accelerating the development of next-generation AI models for general-purpose robots. This is huge. Now, for those not familiar with Sanctuary AI, let me give you a quick summary.
These people have an ambitious mission: to create the world's first human-like intelligence in general-purpose robots. Yes, we're talking about artificial general intelligence, or AGI, the holy grail of AI that can truly understand and interact with the world like us humans. But here's the thing: Sanctuary AI isn't just a scrappy startup talking big. These guys have some serious pedigree. We're talking founders of pioneering companies like D-Wave, the first commercial quantum computing company; Kindred, which introduced reinforcement learning to production robots; and the Creative Destruction Lab, known for commercializing innovative science. Their team is filled with veterans from tech titans like Amazon, Microsoft and SoftBank Robotics.
In other words, these people have been at the forefront of some of the most groundbreaking innovations in robotics, AI and computing, so when they make bold claims about pushing the limits of AGI, you can't help but take them seriously. So what exactly does this collaboration with Microsoft entail? It is about harnessing the power of Microsoft's Azure cloud to boost the development of what Sanctuary AI calls large behavioral models, or LBMs. Think of these as the evolution beyond today's large language models like GPT: AI systems designed to learn and understand from real-world experiences and interactions. By leveraging Azure's massive computing resources for training, inference and storage,
Sanctuary AI can accelerate its efforts in developing these cutting-edge LBMs, a critical step on the path to achieving AGI. Just a few weeks ago, Sanctuary AI unveiled the seventh generation of its flagship Phoenix robot, and the upgrades and improvements in this latest iteration are nothing short of mind-blowing. We're talking increased uptime, enhanced visual perception, improved tactile sensing and, get this, an expanded range of human-like motion in the wrists, hands and elbows. This thing is designed to mimic the dexterity and range of motion of a real human being, which is crazy when you think about it. And here's the interesting part: Sanctuary AI has made all these advances in just 11 months since its sixth-generation Phoenix robot, which was named one of the best inventions of 2023. They have advanced very quickly; it's almost hard to believe. According to Sanctuary AI's CEO,
Geordie Rose, this seventh-generation Phoenix is one of the most sophisticated human-behavior data-capture technologies available today, essentially providing its Carbon AI control system with some of the highest-fidelity, highest-quality training data ever to exist, which is essential for developing those advanced LBMs and moving toward AGI. But perhaps the most mind-blowing statistic of all is that the time it takes to automate new tasks has been reduced from weeks to less than 24 hours, a roughly 50-fold increase in automation speed, marking a turning point in the capabilities of autonomous systems. Let that sink in for a second: automating complex tasks in less than a day. The potential applications in industries like manufacturing, logistics and more are staggering.
Sanctuary AI is making some very bold claims here, stating that this seventh-generation system is the closest analogue to a human of any robot available today. That's a strong claim, to be sure, but coming from a team with this pedigree and track record, you can't just dismiss it out of hand. Of course, with any innovative technology there are always ethical considerations and potential risks to weigh, and the development of advanced AI and robotics is no exception: there are real concerns around safety, transparency, unintended consequences, and the impact on human work. To its credit, Sanctuary AI appears well aware of these issues and has emphasized a commitment to responsible development and to addressing workplace challenges
Sanctuary AI appears well aware of these issues and has emphasized a commitment to responsible development and addressing workplace challenges. in a way that augments human workers, not replaces them, the Sanctuary name itself is intended to convey a safe and controlled environment to responsibly nurture AI before commercial deployment. Make no mistake, the road ahead will surely be filled with immense challenges both technical and ethical, but if companies like Sanctuary Ai and its all-star teams can continue this breakneck pace of innovation while keeping responsible development at the forefront; We may witness some truly mind-blowing advancements in the coming years in the era of ubiquitous multipurpose robotic assistance powered by human-like artificial devices.
Intelligence is on the horizon and with visionary pioneers and powerful partnerships like this leading the charge the future could be much closer than any of us could have imagined, so Unry just released the trailer for their new humanoid robot unry G1 and this is the mind. -The flexibility, mastery and overall capabilities of this robot are really something else. I'm going to explain and show you exactly why this robot is a game changer, so there's a lot to unpack here and first of all, this robot is genuinely revolutionary. You may have noticed that it looks a bit like the Atlas robot recently introduced by Boston Dynamics.
I don't think Unitree managed to create this robot in just the three weeks since that trailer came out; something like this takes a long time to develop. What it does show is that Unitree is evolving faster than many people expected, and they are offering some truly impressive technology at a relatively affordable price: the G1 starts at just $16,000, which is cheaper than any other humanoid robot with these kinds of capabilities. One of the things that really caught my attention in this demo was the incredible flexibility and range of motion. As you can see here, this robot can twist in ways that humans simply cannot, and its legs have a much greater range of motion.
They have also managed to internalize the wiring using hollow-joint technology, which allows for more efficient movement and a more compact design. This trailer really highlights the fact that the humanoid robot race is on. I don't think anyone predicted that Unitree would release something like this; I would have expected it to come first from a company like Boston Dynamics. Whether a robot uses hydraulic actuators or all-electric systems, the G1's motion capabilities suggest it will be able to handle a wide range of tasks we may not have even considered. Another thing that really impressed me was the robot's stability when receiving impacts.
Now, I'm not saying we should go around beating up robots (that might not end well if they decide to rebel against us in the future), but in all seriousness, the fact that this robot can take a solid hit to the chest and remain completely stable is a remarkable feat of engineering. In real-world applications, robots may encounter things like wind, debris or unexpected collisions, and being able to stay upright and keep functioning in those situations is hugely important. Most humans would probably fall to the ground if they took a hit like that, so this is a really exciting development.
We've seen a similar stability demonstration from Unitree before with their H1 robot, but the G1 takes it to a whole new level. If you tried this with most other humanoid robots on the market, they would fall over in an instant. The fact that Unitree has achieved this level of stability without even using external support cables is mind-blowing. Usually in these kinds of robotics demonstrations you will see the robot tethered to the ceiling or something to prevent falls, because balance is one of the biggest challenges in the early stages of development, but Unitree has created a robot that can take hits and stay upright with all the wiring hidden internally, which is a huge achievement. The G1's speed is also really impressive: it can reach speeds of up to 3.3 m/s, just shy of the world record set by Unitree's previous robot, the H1.
For context, most humanoids walk at around 2 m/s at most, so this is almost a running pace, something we haven't seen from other humanoid robotics companies. The Tesla robot, for example, can walk quite smoothly, but it doesn't move nearly as fast, and I can't imagine they'd be too eager for someone to kick it to test its stability. In terms of overall performance, I would say the Unitree G1 is probably one of the best humanoid robots out there right now. One of the things that absolutely amazed me was the robot's ability to learn tasks through simulation and then apply them in the real world.
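One simple flavor of learning from demonstrations, behavior cloning (a form of imitation learning), can be sketched in a few lines. This toy stands in for the real pipeline only conceptually; it is not Unitree's actual training code, and the "expert" here is just a hypothetical proportional controller.

```python
# Minimal behavior-cloning sketch of the imitation-learning idea (a toy
# stand-in, NOT Unitree's pipeline): record (state, action) pairs from an
# "expert" controller in simulation, then fit a policy that reproduces it.
import random

random.seed(1)

def expert_policy(state):
    # Hypothetical expert: a simple proportional controller.
    return -1.5 * state

# 1) Collect demonstrations in "simulation".
demos = [(s, expert_policy(s)) for s in (random.uniform(-1, 1) for _ in range(100))]

# 2) Fit a linear policy a = k * s by least squares (closed form).
num = sum(s * a for s, a in demos)
den = sum(s * s for s, _ in demos)
k = num / den

def cloned_policy(state):
    return k * state

# 3) The cloned policy now matches the expert, even on states it never saw.
print(f"learned gain k = {k:.3f}")
```

Real humanoid training replaces the one-parameter line fit with a deep neural network and millions of simulated trials, but the recipe of "demonstrate, fit, deploy" is the same.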
Using imitation and reinforcement learning, Unitree can train the robot to perform complex actions in a simulated environment like Nvidia's Isaac Sim and then transfer those skills to the physical robot. Watching it move, you can see how precise and subtle the joint movements are, and the fact that it can maintain its balance while performing these tasks autonomously is amazing. This has huge implications for robotics research, because until now most humanoid platforms have been very expensive, making them difficult for many laboratories and universities to access. The Unitree H1, for example, was a great research platform, but its high cost was a big barrier. With the G1 coming in at a lower price, we will see many more people get their hands on this technology and push the limits of what is possible. And then there are the hands.
Previous Unitree robots were criticized for their lack of dexterous manipulators, but the G1 has hands that can grasp and manipulate objects with incredible precision. It can crush a nut, flip a piece of bread, and even open a can of soda like it's nothing. The level of control and fine motor skill on display here is just crazy. They also mentioned something called the Unitree Robot Unified Large Model, which appears to be some kind of AI system that allows the robot to perform these tasks autonomously. There aren't many details available yet, but I assume it's a machine learning model trained on a huge data set of simulated and real-world interactions. As more information comes out,
I'll be sure to update you. One of the other key things they demonstrate is the robot's ability to perform very precise movements, such as soldering electronic components, which opens up a whole new range of potential applications in fields like manufacturing and assembly. But perhaps the most interesting thing about the G1 is its price: at just $16,000 for the base model, it is almost ten times cheaper than Unitree's previous humanoid robot, which was listed at around $150,000. That's a big jump in affordability and will make this technology accessible to a much wider range of researchers and developers. We've already seen some of the amazing things people have been doing with the Unitree H1 platform. For example, researchers from ShanghaiTech and Mars Lab recently presented a framework for learning humanoid parkour that allows the robot to navigate difficult terrain, jump to high platforms, leap over obstacles and more, all using only vision and proprioception.
It is a completely end-to-end system, which is a great achievement in the field of robotics. The problem is that many academic laboratories struggle to get funding for this type of research because they simply cannot afford the expensive hardware, but with the G1 we will see an explosion of new projects and discoveries. Remember that the H1 could do a full backflip, which is simply mind-blowing for a robot with such basic actuators; imagine what people are going to do with the advanced capabilities of the G1. In short, the Unitree G1 is an absolute game-changer. Its combination of flexibility, stability, dexterity and affordability
is unparalleled and will open up a whole new world of possibilities in the field of robotics. I can't wait to see what researchers and developers do with this platform once it starts shipping in the coming months. The robotics race is heating up, and Unitree has just proven that China is a serious contender on the global stage; they are innovating at an incredible rate and making this technology accessible in a way no one else has before. So stay tuned: I have a feeling we will see many more innovative developments from them in the very near future. Next, here is some interesting news from the United Arab Emirates. This announcement comes from the Abu Dhabi Technology Innovation Institute, also known as TII.
They have launched the Falcon 2 series with two models: Falcon 2 11B and Falcon 2 11B VLM. Falcon 2 11B is, as they describe it, a super powerful text-based AI designed to understand and generate human-like text, while Falcon 2 11B VLM is a vision language model, meaning it can take an image and generate a detailed text description of it. Let's talk about why the UAE is making such big moves in AI. Known for its vast oil reserves, the UAE is now investing massively in artificial intelligence, and this shift has not gone unnoticed; in fact, US officials took note last year and issued a strong ultimatum for the UAE to choose between US and Chinese technology.
The Emirati artificial intelligence company G42 then cut ties with Chinese companies, paving the way for a massive $1.5 billion investment by Microsoft. This strategic pivot was carefully coordinated with Washington, demonstrating the UAE's serious commitment to the artificial intelligence space. The Falcon 2 launch comes as countries and companies around the world rush to develop their own large language models, and while some companies are keeping their AI technology proprietary, others, like the UAE's Falcon and Meta's Llama, are going open source.
The Falcon 2 series is just the beginning, with the anticipated Falcon 3 already in the works. Next, a company called Robot Era has introduced a new piece of AI-powered robotic hardware, and it's pretty crazy. This device, called XHand, is apparently a game-changer in the world of embodied AI. Embodied AI is about creating robots that can interact with the real world just like humans do, and XHand seems to be taking things to the next level: it has 12 active degrees of freedom, meaning it can move and bend in many ways, much like a human hand, and it has built-in tactile sensors so it can really sense and feel the things it touches.
Its control accuracy is supposed to be top-notch, allowing for smooth and reliable movements. But here's the kicker: Robot Era claims the technology behind XHand is completely self-developed, which is pretty impressive for a relatively new player in robotics and AI; they were only founded in 2023, but they are already pushing the limits of what is possible. We don't yet know how much this device will cost or when it will be available to the public, because Robot Era is keeping those details under wraps for now, but from what they've shown us, XHand seems designed specifically for embodied AI, which could be a big deal as we move closer to achieving AGI. And speaking of AGI, there has been a lot of buzz lately that this could happen sooner rather than later.
I'm not even kidding: some experts say we could be on the verge of a breakthrough in this field. Now let's talk a little more about Robot Era and what they've been doing. They are focused on developing general-purpose humanoid robots, which is pretty ambitious. One of their main projects is called Humanoid-Gym, which uses reinforcement learning to train robots to perform tasks in a way that mimics human capabilities. They use a concept called zero-shot sim-to-real transfer, which basically means they can train the robots in a simulated environment first and then deploy them in the real world without any additional training. But enough talk, let's check it out.
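Before the demo footage, the zero-shot sim-to-real idea is worth a miniature sketch. One common ingredient is domain randomization: train across many randomized simulated dynamics so the result also works on an unseen real system. This is a toy illustration with made-up dynamics, not Robot Era's actual Humanoid-Gym code.

```python
# Toy sketch of domain randomization, an idea behind zero-shot sim-to-real
# transfer (illustrative only, NOT Robot Era's method): tune a controller
# across many randomized simulated dynamics so it also stabilizes an
# unseen "real" system without retraining.
import random

random.seed(2)

def simulate(k, mass, steps=20):
    """Discrete toy plant: x <- x - (k/mass)*x. Returns final |x|."""
    x = 1.0
    for _ in range(steps):
        x -= (k / mass) * x
    return abs(x)

# "Simulation" masses are randomized; the "real" mass is never trained on.
sim_masses = [random.uniform(0.5, 2.0) for _ in range(50)]
real_mass = 1.3

# Pick the gain with the best worst-case error across randomized sims.
candidates = [i / 100 for i in range(1, 100)]
best_k = min(candidates, key=lambda k: max(simulate(k, m) for m in sim_masses))

# Zero-shot: the same gain stabilizes the unseen real system.
real_error = simulate(best_k, real_mass)
print(f"best gain {best_k:.2f}, error on unseen system {real_error:.6f}")
```

A real pipeline randomizes friction, masses, latencies and sensor noise inside a physics simulator and optimizes a neural-network policy, but the robustness logic is the same: if the policy survives every randomized world, the real world looks like just one more sample.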
Let's see this robot in action and find out what it can really do. Robot Era has released a presentation video, and I must say the capabilities they are demonstrating are quite fascinating. Let's watch that footage and see what this thing can do. You may remember a while ago when I covered Robot Era, the ambitious startup that introduced its crazy dexterous XHand. Well, they just outdid themselves in a major way by becoming the first company to put a full-sized humanoid robot on the Great Wall of China. Their latest robot, called XBot-L, conquered one of the most iconic and challenging landscapes in the world: the ancient stone paths and steep stairs of the Great Wall. This is a great milestone not only for Robot Era but for humanoid robotics as a whole: a 5-foot-5 (165 cm) robot that looked like a slimmed-down Terminator advanced across those irregular surfaces of centuries-old stone without hesitation, waving to the camera and even practicing a few moves.
It showed off kung fu moves and went up and down stairs the whole way. And to be honest, the Great Wall is not a walk in the park even for us humans: those paths are cracked, the stairs are very steep with no handrails, and there are random potholes everywhere. It's an obstacle course designed to trip up any bipedal robot, yet the XBot-L handled everything with surprising ease thanks to Robot Era's advanced perceptual reinforcement-learning algorithms. Basically, this robot can perceive and make sense of its environment in real time using sensors and AI: it can identify obstacles and changes in terrain and literally adapt its gait and balance on the fly to deal with whatever the Great Wall throws at it. According to one of the co-founders, this perceptual reinforcement-learning technology gives the XBot-L something like human-level awareness and decision-making capability in unknown environments; the robot can essentially think for itself about the best way to navigate complex areas safely and efficiently. And we're not just talking about basic pre-programmed movements here: the XBot-L was climbing the wall's steep stairs, dealing with slopes and inclines, avoiding obstacles and adapting to low-light conditions, all without any human guidance or help, basically achieving the kind of intelligent, situation-aware adaptive movement we associate with people. Robot Era says one of the biggest challenges was developing end-to-end artificial intelligence algorithms that could translate data from the robot's sensors into accurate, stable real-world locomotion through an ultra-complex environment like the Great Wall; judging by this demo, it sounds like they cracked that nut. The really crazy thing is that this Great Wall feat comes just a couple of weeks after they showed off the dexterous XHand I mentioned earlier.
Between these two products, it is clear that Robot Era is focused on building embodied-AI robots with human-like physical capabilities to operate in our world, and they are not wasting time either: even though they were only founded in 2023, they have already invested heavily in cutting-edge artificial intelligence technology such as reinforcement learning, neural networks and sim-to-real transfer, basically everything necessary to create robots that can approach the real world just as we do. They seem intent on maintaining that competitive advantage by iterating and producing smarter, more capable humanoid robot products at a rapid pace.
They're moving at a rapid pace, and they want to continue perfecting ways to transfer all that AI training from simulations to physical machines, allowing those robots to become more versatile in real-life scenarios. The ultimate goal appears to be the development of general-purpose, ultra-flexible humanoid robots for countless valuable applications across industries such as manufacturing, healthcare, services, and more. I have to give props to RobotEra: deliberately putting your robot on the Great Wall as a kind of extreme stress test shows great confidence in your technology. Most companies would avoid something so risky with a brand-new product, but the fact that the XBot-L was able to pull it off so smoothly is really impressive.
It's an incredible proof of concept for their embedded AI approach and perceptual reinforcement learning systems, working in one of the toughest real-world environments. To me, this demonstrates RobotEra's excellent core capabilities in fields like robotics, AI, and mechatronics; they are clearly at the forefront when it comes to embedded intelligence and making it work reliably in the real world. I'm looking forward to whatever crazy robots they come up with next. If they continue to raise the bar this way, they could legitimately help usher in an era of advanced human-like robots that operate autonomously in our world at an unprecedented level.
We are potentially on the cusp of major advances in artificial general intelligence, or AGI, becoming a reality in the next few years, and if anyone can build the physical robotic bodies to house and manifest that future AGI, it could well be an ambitious little pioneer like RobotEra. When it comes to developing super advanced AI systems like AGI, though, the whistleblower situation at OpenAI shows just how high the stakes are. RobotEra's achievement with its robot on the Great Wall is definitely impressive from an embedded AI perspective, but achievements like that might seem small compared to the impacts, both positive and negative, that creating an artificial general intelligence on the level of human cognition could have on society. On the one hand, AGI represents an incredible technological breakthrough that could help solve countless challenges facing humanity, from unsolved scientific mysteries to global crises; a general AI mind that matches human intelligence has long been seen as a potential catalyst for transformative advances in every field. On the other hand, the existential risks of an unaligned superintelligent AGI that is smarter than humans cannot be overstated.
We are talking about the potential for an advanced AI system to break free from human control, with catastrophic consequences for our species and the planet if it is not developed safely and responsibly. OpenAI whistleblowers are sounding the alarm on this exact scenario. They allege that the company is recklessly rushing toward AGI supremacy without taking proper safety precautions, driven more by competitive pressure to be first than by doing it the right way. Damning claims, like OpenAI ignoring its own safety review processes, muzzling employees with extremely strict non-disparagement agreements, and deprioritizing safety work after founding members raised concerns, paint a very troubling picture: if true, one of the major AI labs is taking an extremely risky and unethical approach in this existential race for AGI.
The fact that respected researchers like Ilya Sutskever felt compelled to resign, due to OpenAI's perceived lack of commitment to keeping AGI safe and aligned, only adds credibility to the whistleblowers' claims. In my opinion, their letter calling for greater transparency, protections for employees, and ultimately government regulation of this powerful technology to keep bad actors in check seems like a reasonable and perhaps necessary step to prevent potential disaster scenarios. Because let's face it, we simply can't afford a situation where the first superintelligent AGI to emerge is catastrophically misaligned with human ethics and values; exercising restraint and prioritizing responsible development should be top priorities. Of course, the whistleblowers probably have some level of bias given their philosophical viewpoints and their links to the effective altruism movement.
I would therefore recommend taking a more neutral stance personally, but even allowing for that bias, the magnitude of what is at stake with AGI certainly warrants a strong commitment to prioritizing safety measures and ethical boundaries over the competitive pressures and commercial interests of companies such as OpenAI. Failure to properly align an emerging AGI could risk irreparably disrupting human civilization as we know it, and any private entity that recklessly ignores those concerns in its quest to be first is essentially playing a potential extinction-level game of Russian roulette with the future of humanity. So while I'm impressed by the advances in AI and cutting-edge robotics from companies like RobotEra,
I really hope the broader AI community, whether researchers, executives, policymakers, or others, is taking the whistleblowers' outspoken warnings about these existential risks just as seriously, because we may only get one chance to develop AGI in a controlled and responsible manner that ensures this innovative technology remains an asset that allows humanity to prosper rather than an unintended catalyst for our potential downfall. The stakes are that high. Okay, don't forget to hit the subscribe button for more updates. Thanks for tuning in, and we'll see you next time.
