Amazon Takes “A Significant Leap in Robotics” with Touch-Sensitive Devices

Amazon has announced what it calls a significant advance in robotics: a robot named Vulcan, equipped with tactile sensors, that can grasp approximately three-quarters of the items in its vast warehouses.

At its “future delivery” event on Wednesday in Dortmund, Germany, the US company unveiled plans to deploy the technology globally over the next few years, helping humans sort items for storage and prepare them for delivery as online retailers’ operations continue to grow.

Aaron Parness, the Robotics Director at Amazon, referred to Vulcan as “a major leap in robotics,” highlighting the robot’s ability to not only observe its environment but also to perceive it through touch, facilitating tasks previously deemed impossible for Amazon robots.


Using AI, these robots can identify objects through touch. They work alongside the human employees who stow and retrieve items from shelves, assisting them at picking stations served by wheeled robots.

Vulcan robots stow items in the top and bottom levels of the shelving units, known as pods, eliminating the need for workers to use ladders. Amazon’s current warehouse robots use suction cups and computer vision to pick and manipulate items.

Such advancements may evoke concerns about job losses, as retailers reduce human labor in distribution centers that employ thousands.

Many retailers are increasing their investment in automation as labor costs rise globally. Amazon has also faced industrial action over pay at its UK warehouses.

Goldman Sachs economists predicted in 2023 that as many as 300 million jobs globally could be eliminated by 2030 due to the rise of generative AI, fundamentally altering various roles.

In the UK, the Tony Blair Institute has estimated that between 60,000 and 275,000 jobs could be displaced annually over the next decade amid the upheaval.

Nonetheless, Tye Brady, chief technologist at Amazon Robotics, asserted that robots cannot entirely replace humans in its facilities, stating that they “enhance human potential” and improve workplace safety. He joked about his affection for R2-D2, describing Amazon’s machines as similarly supportive “collaborative robots.”

“Humans will always be part of the equation,” he noted, explaining that robots take on “menial, mundane, and repetitive tasks.”

“Complete automation isn’t feasible just yet. We will always require human oversight to understand operational value.”

He also emphasized that people play a critical role in safeguarding against hacking, especially after incidents like the cyber-attack that disrupted Marks &amp; Spencer’s online services.

“Machines can detect hacks, but human intervention is often what reveals them, making it beneficial to have people involved,” said Brady.

He also noted that humans excel at identifying minor issues, such as package damage or leaks during delivery that could disrupt the system.

According to Brady, AI is enhancing robot development, allowing them to navigate complex spaces autonomously while learning to move safely alongside humans and other objects. He highlighted that the latest generation of robots can “seek help” and adapt to new methods effectively.

“It’s thrilling to integrate both cognition and physical capability,” he said. “We’re just starting this exciting journey.”

Amazon also plans to deploy technology that uses machine learning and automation to create customized packaging that minimizes waste. By the end of this year, more than 70 of the machines will be operational in Germany, the UK, France, Italy, and Spain, with more planned by 2027.

This announcement coincides with Amazon’s launch of a budget-friendly delivery service in the UK, featuring thousands of products priced under £20, as the company takes on low-cost competitors Shein and Temu.

Source: www.theguardian.com

Fungal Networks Enhance Robotics Through Scientist’s Innovations

In today’s society, there is a growing interest in artificial intelligence (AI) and robotics due to their potential to enhance workflow, communication, and technical capabilities. However, researchers are faced with the challenge of adapting robots quickly to external stimuli for more fluid movement in their environments. To achieve this, scientists are exploring the intricate systems of brain cells that communicate through neural networks.

A team of researchers from Cornell University aimed to address limitations in robotics that computer programs have struggled with, such as short lifespan, intensive maintenance, and low responsiveness to environmental changes. They investigated the potential of improving biohybrid neural networks using living materials combined with synthetic materials to enable faster reactions to unpredictable situations and problem-solving in robots.

Previous studies have utilized neural networks based on animal and plant cells to enhance robot movement and environmental responsiveness. However, maintaining these cells in artificial environments can be challenging and requires extensive care. The researchers in this study focused on using a more robust non-animal system based on fungi, which transmit information through electrical signals similar to animals.

Fungi create mycelial networks to transport nutrients, detect signals, and respond to environmental cues, making them resilient and less susceptible to contamination than animal cells. The researchers built two robots, one with independent arm movements and the other with forward-backward motion, and integrated the king oyster mushroom fungus (Pleurotus eryngii) into their control boards to observe its natural electrical signals and responses to stimuli.

By growing the fungi on the robot’s control interface and analyzing the bioelectrical signals, the researchers discovered that the network effectively controlled the robot’s functions. They also observed the fungus’s response to different light stimuli, leading to the conclusion that fungal biohybridization could revolutionize robotics with its adaptability and sensory capabilities.

The researchers conducted experiments to test the robot’s reaction to ultraviolet light, showcasing the fungus’s ability to control the robot’s movements solely through natural electrical signals. They proposed that fungal biohybridization offers a promising avenue for advancing robotics by leveraging fungi’s resilience and sensory capabilities for improved adaptability and reliability.
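As an illustration of the general idea (a hypothetical sketch, not the researchers’ actual signal-processing pipeline), a controller of this kind might count spikes in the recorded bioelectric trace and scale the firing rate into a motor command:

```python
import numpy as np

def spikes_to_motor_speed(voltage_trace, threshold_mv=0.5, gain=10.0):
    """Count threshold crossings ("spikes") in a bioelectric recording
    and scale the firing rate into a motor speed command."""
    above = voltage_trace > threshold_mv
    # A spike is a sample that crosses the threshold from below.
    n_spikes = np.count_nonzero(above[1:] & ~above[:-1])
    return gain * n_spikes / len(voltage_trace)

# Simulated recordings: UV stimulation adds extra spiking activity.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.2, 1000)              # quiescent mycelium
stimulated = baseline + (rng.random(1000) < 0.05)  # extra spikes under UV
print(spikes_to_motor_speed(stimulated) > spikes_to_motor_speed(baseline))  # True
```

Under this toy scheme, a light stimulus that raises the mycelium’s spiking rate would directly speed up the robot, which is the kind of stimulus-to-motion coupling the study demonstrated.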


Source: sciworthy.com

Interview with Ken Goldberg: Robotics Expert from UC Berkeley

In the coming weeks, TechCrunch’s robotics newsletter, Actuator, will feature Q&As with some of the top people in robotics. Subscribe here for future updates.

Part 1: Matthew Johnson-Roberson of CMU
Part 2: Max Bajracharya and Russ Tedrake of Toyota Research Institute
Part 3: Dhruv Batra of Meta
Part 4: Aaron Saunders of Boston Dynamics

Ken Goldberg is a professor at the University of California, Berkeley, the William S. Floyd Jr. Distinguished Chair in Engineering, and co-founder and principal scientist of a robotic package sorting startup. He is ambidextrous and a Fellow of the IEEE.

What role will generative AI play in the future of robotics?

Although the rumors started a little early, 2023 will be remembered as the year generative AI transformed robotics. Large language models like ChatGPT allow robots and humans to communicate in natural language. Words have evolved over time to express useful concepts, from “chair” to “chocolate” to “charisma.” Robotics engineers have also discovered that vision-language-action (VLA) models can be trained to facilitate robot perception and to control a robot’s arm and leg movements. Training requires vast amounts of data, so labs around the world are now collaborating and sharing data. The results continue to emerge, and although there are still open questions about generalizability, the impact will be profound.

Another interesting topic is “multimodal models” in two senses of multimodal.

  • Multimodal, combining different input modes such as visual and verbal. This has now been expanded to include tactile and depth sensing, as well as robotic actions.
  • Multimodal in that the same input state can allow different actions. This is surprisingly common in robotics. For example, there are many ways to grasp an object. Standard deep models “average” across these grasp actions, which can produce very poor grasps. One very promising method for preserving multimodal actions is diffusion policies, developed by Shuran Song, now at Stanford University.
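The mode-averaging failure is easy to see with a toy example (hypothetical numbers, not from the interview): two demonstrations grasp a symmetric object from opposite sides, and a model trained with mean-squared error learns their average, which matches neither.

```python
import numpy as np

# Two equally valid grasp approach angles (radians) for a symmetric object:
# approach from the left (0) or from the right (pi).
demonstrations = np.array([0.0, np.pi])

# A regressor trained with mean-squared error converges to the mean of its
# targets, so it predicts a grasp angle that matches neither demonstrated mode.
mse_prediction = demonstrations.mean()
print(mse_prediction)  # pi/2 (~1.5708): an approach direction no one demonstrated
```

A diffusion policy instead learns to sample from the full action distribution, so it returns one of the demonstrated modes (0 or pi) rather than their average.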

What do you think about the humanoid form factor?

I’ve always been skeptical of humanoid and legged robots because they can be overly sensational and inefficient, but after seeing the latest humanoid and quadruped robots from Boston Dynamics, Agility, and Unitree, I have reconsidered. Tesla has the engineering skills to develop low-cost motor and gear systems at scale. Legged robots have many advantages over wheels when navigating steps, debris, and rugs in homes and factories. And although two-handed (two-armed) robots are essential for many tasks, I still believe that a simple gripper is more reliable and cost-effective than a five-fingered robot hand.

What will be the next major category of robots after manufacturing and warehousing?

After the recent union wage settlement, I think we’ll see even more robots in manufacturing and warehouses than we do now. Recent advances in self-driving taxis have been impressive, especially in San Francisco, where driving conditions are more complex than in Phoenix. But I’m not convinced they are cost effective. For robot-assisted surgery, researchers are exploring “enhanced dexterity,” or the ability of robots to improve surgical skills by performing low-level subtasks such as suturing.

How far have true general-purpose robots evolved?

I don’t think we’ll see true AGI or general-purpose robots in the near future. No roboticist I know is worried about robots taking their jobs or taking over.

Will household robots (beyond vacuum cleaners) become commonplace within the next 10 years?

I predict that within the next 10 years, we’ll see affordable household robots that can pick up clothes, toys, trash, etc. from the floor and put them in the appropriate bins. Like today’s vacuum cleaners, these robots will occasionally make mistakes, but the benefits to parents and seniors will likely outweigh the risks.

What are some important robotics stories/trends that aren’t getting enough coverage?

Robot motion planning. This is one of the oldest subjects in robotics: how to control a robot’s motorized joints to move its tools while avoiding obstacles. Many people think this problem has been solved, but it has not.

Robotic “singularities” are a fundamental problem for all robotic arms. They are very different from Kurzweil’s hypothetical point at which AI surpasses humans. A robot singularity is a point in space where the robot stops unexpectedly and must be manually reset by a human operator. Singularities arise from the calculations required to convert the desired linear motion of the gripper into the corresponding motion of each of the robot’s six joint motors. At certain points in space, this transformation becomes unstable (similar to a divide-by-zero error) and the robot must be reset.

For repetitive robot motions, singularities can be avoided through tedious manual fine-tuning so that the motion never encounters one; once such a motion is determined, it is repeated over and over. However, singularities are common in a growing generation of applications where robot movements are non-repetitive, such as palletizing, bin picking, order fulfillment, and package sorting. They are a well-known fundamental problem because they disrupt the robot’s operation at unexpected times, often several times an hour. I co-founded a new startup, Jacobi Robotics, to implement efficient algorithms that are guaranteed to avoid singularities. This can significantly increase the reliability and productivity of all robots.
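The divide-by-zero analogy can be made concrete with a toy two-link planar arm: the Jacobian matrix maps joint velocities to gripper velocities, and a singularity is a configuration where that matrix loses rank. This is a minimal illustrative sketch, not Jacobi Robotics’ actual algorithm:

```python
import numpy as np

def jacobian_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Jacobian of a planar two-link arm: maps joint velocities to the
    linear velocity of the gripper."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

def near_singularity(J, threshold=1e-3):
    """Converting a desired gripper motion to joint motions requires
    inverting J; as det(J) -> 0 the required joint velocities blow up
    (the divide-by-zero analogy), so the controller must stop."""
    return abs(np.linalg.det(J)) < threshold

# A fully stretched arm (elbow angle 0) is a classic singular configuration.
print(near_singularity(jacobian_2link(0.3, 0.0)))  # True
print(near_singularity(jacobian_2link(0.3, 1.2)))  # False
```

For this arm, det(J) = l1 · l2 · sin(theta2), so the arm is singular exactly when the elbow is fully extended or fully folded; real six-joint arms have analogous but more complicated singular configurations.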

Source: techcrunch.com

An Interview with Nvidia’s Deepu Talla on Robotics Technology

A version of this Q&A first appeared in Actuator, TechCrunch’s free robotics newsletter. Subscribe here.

We conclude our year-end robotics Q&A series with this entry from Deepu Talla, whom I visited at Nvidia’s Bay Area headquarters in October. Talla has served as vice-president and general manager of embedded and edge computing at the leading chip company for more than 10 years, and he provides unique insight into the current state and future direction of robotics in 2023. Over the past few years, Nvidia has established the leading platform for robotics simulation, prototyping, and deployment.


What role will generative AI play in the future of robotics?

We are already seeing productivity gains from generative AI across a variety of industries. It is clear that the impact of GenAI will be transformative across robotics, from simulation to design and more.

  • Simulation: Models can now accelerate simulation development by bridging the gap between 3D technical artists and developers by building scenes, building environments, and generating assets. These GenAI assets will see increased use in synthetic data generation, robotic skill training, and software testing.
  • Multimodal AI: Transformer-based models improve robots’ ability to better understand the world around them, allowing them to operate in more environments and complete complex tasks.
  • Robot (re)programming: Improves the ability to define tasks and functions in a simple language to make robots more versatile/multipurpose.
  • Design: Novel mechanical designs (end effectors, etc.) to improve efficiency.

What do you think about the humanoid form factor?

Designing autonomous robots is difficult, and humanoids are even more so. Unlike most AMRs (autonomous mobile robots), which primarily need to understand floor-level obstacles, humanoids are mobile manipulators that require multimodal AI and a deeper understanding of their surrounding environment. They demand a huge amount of sensor processing, advanced control, and skill execution.

Breakthroughs in generative AI capabilities for building foundational models are making the robotic skills needed for humanoids more commonplace. In parallel, we are also seeing advances in simulation that can train AI-based control and perception systems.

What will be the next major category of robots after manufacturing and warehousing?

Markets where companies are feeling the effects of labor shortages and demographic changes will continue to coincide with corresponding robotic opportunities. This spans robotics companies across a variety of industries, from agriculture to last-mile delivery to retail and more.

The main challenge in building various categories of autonomous robots is building the 3D virtual worlds needed to simulate and test the stack. Again, generative AI helps by allowing developers to build realistic simulation environments faster. Integrating AI into robotics will enable greater automation in environments that are currently less “robot friendly.”

How far have true general-purpose robots evolved?

We continue to see robots becoming more intelligent and able to perform multiple tasks in a given environment. The focus will remain on mission-specific problems while making robots more generalizable. True general-purpose embodied autonomy is still a long way off.

Will household robots (beyond vacuum cleaners) become commonplace within the next 10 years?

We will see useful personal assistants, lawn mowers, and robots that assist the elderly become commonplace.

The trade-offs that have hampered home robots so far are between how much someone will pay for their robot and whether the robot provides that value. Robot vacuums have long offered value for their price point, which is why they’re growing in popularity.

And as robots become smarter, having an intuitive user interface will be key to increasing adoption. Robots that can map their environment and receive voice commands will be easier for home consumers to use than robots that require programming.

The next category to take off will likely focus on things like automated outdoor lawn care. Other domestic robots, such as personal/healthcare assistants, also hold promise, but must address some of the indoor challenges encountered in dynamic, unstructured home environments.

What are some important robotics stories/trends that aren’t getting enough coverage?

The need for a platform approach. Many robotics startups are unable to scale because they develop robots suited to specific tasks or environments. For large-scale commercial realization, it is important to develop more versatile robots. That is, robots that can quickly add new skills or bring existing skills to new environments.

Robotics engineers need a platform with tools and libraries to train and test AI for robotics. The platform should provide simulation capabilities to train models, generate synthetic data, run the entire robotics software stack, and be able to run modern generative AI models directly on the robot.

Tomorrow’s successful startups and robotics companies will need to focus on developing new robotic skills and automation tasks, and take full advantage of available end-to-end development platforms.

Source: techcrunch.com

Durable Raises $14 Million to Build AI Tools for Small Service-Industry Businesses

Builders, bakers, and body conditioners may not be the first professions that come to mind when you think of how AI is changing the way we work. But growing interest in the technology is driving healthy funding for startups building AI-powered business tools for small businesses in these and the thousands of other categories that make up the service industry, and today one of them announced a funding round.

Durable, a Vancouver, Canada-based startup that builds an AI website creator and a host of other AI-powered tools to help small business owners plan, create, and run their businesses more easily, has raised a $14 million Series A, which it will use to continue expanding its platform and customer base.

This round is not the largest Series A, but it comes with an interesting list of investors. Spark Capital led the round, with participation from Torch Capital, Altman Capital (a VC founded and managed by Jack Altman, brother of OpenAI’s Sam Altman), Dash Fund, South Park Commons, Infinity Ventures, and Soma Capital, all previous backers. The startup has now raised a total of $20 million.

Durable’s AI-powered website builder is aimed at people with little to no online presence and, the company says, has already been used to create more than 6 million websites since its launch a year ago.

“We have a lot of traditional companies that have been around for a long time but don’t have an online presence. They don’t have the software, they don’t have the systems. That’s a big part of our customer base,” founder and CEO James Clift said in an interview. “Plumbers, skilled craftsmen, personal trainers. A lot of businesses with one to six people don’t have the time or resources to actually build an online presence or create marketing materials.”

Durable will continue to build on that momentum and leverage advances in the world of AI to build more tools for users.

The end goal, Clift said, is an omniscient assistant that not only answers users’ questions, but also proactively suggests ways to run their business better.

Clift said in an interview that a beta version of its “automated proactive assistant” will be released “soon,” likely within about three months.

The assistant can be trained on the different needs of a user’s specific profile (a baker may not want or need the same information as a body conditioner or a builder), including areas such as taxes, he said. “You press a button and your business runs in the background. It texts you once a day, and you have work booked on your calendar, so all you have to do is show up to work.”

Other tools Durable has built to complement its flagship website builder include a CRM platform, an invoicing service, a blog builder, and a precursor to the proactive assistant: an AI bot that lets users ask questions relevant to their business. The AI assistant uses OpenAI’s LLMs, among other things.

The gap in the market that Durable is filling is actually a well-known one in the technology world.

Small businesses and sole proprietors have been an elusive target for startups developing business tools. Despite accounting for more than 99% of all businesses in markets like the US and the UK, small businesses are harder customers to court because they spend less individually than larger businesses (making per-customer ROI harder for vendors) and are generally a fragmented population when it comes to their technology needs.

Of course, none of the above is new information in the world of technology. Dozens of startups and large tech companies target small and medium-sized businesses, especially those in the service industry, building apps to manage everything from teams and accounting to banking and payroll.

Clift said Durable’s unique selling point is that it applies advances in AI to problems to bring small business owners and employees into the modern era.

In his view, AI has a democratizing role. First, SMBs now have access to more affordable tools that were previously out of reach. For example, Durable works to create a logo and branding builder for its users, but if that service were provided by a consultancy, it would have been beyond most customers’ budgets.

Second, the use of AI means that Durable itself can scale out its services more easily, avoiding the problems of selling and distributing services to a fragmented customer base.

“Advances in software will allow us to start delivering a ton of value that even last year would only have been available to enterprise customers,” he said. “We can now provide an even better level of service to independent stores who previously couldn’t afford something like this. It’s a very long tail, but it’s a huge market opportunity.”

Durable turned to OpenAI after gaining access thanks to Altman Capital, which led Durable’s seed round.

“OpenAI has been a great partner from day one,” Clift said. Given the trajectory of OpenAI, which is reportedly working to close a new funding round at a valuation of more than $80 billion, the startup is probably one to watch as a close partner with ties to the CEO.

“One of the ideas I’m most interested in right now is how we can leverage AI to help founders build, from scratch, products that are 10x better, cheaper, faster, and more accurate than anything that exists today,” Jack Altman told me. “When I met James, I was not only very impressed with him as a founder, but also excited about the potential of what this product could do for entrepreneurs and small business owners. Since our initial investment, seeing how well he and the team have done has only increased my expectations for what Durable will become.”

“At Spark, we have always pursued founders who challenge the status quo. James and the Durable team are not only doing this in a unique way, but are also creating a global platform that helps entrepreneurs do the same through a frictionless user experience powered by AI,” Natalie Sandman, general partner at Spark Capital, said in a statement.

Source: techcrunch.com