Tag Archives: robotics

Necrobotics: The Fascinating Field of Using Dead Organisms for Robotics

Reading Time: 2 minutes

An exciting new field called necrobotics investigates the possibility of using dead organisms, or biotic materials, as robotic components. The term itself conjures images of both mystery and fear. Though the idea might seem absurd, it has the potential to produce creative and sustainable solutions for a range of applications, from healthcare to archaeology.

Harnessing Nature’s Ingenuity

Nature has equipped many organisms with remarkable mechanisms for carrying out difficult tasks. These natural designs, from the hydraulic limb systems of spiders to the intricate joint structures of insects, offer valuable insights for building more effective and versatile robots. By integrating biotic materials into robotic systems, necrobotics seeks to capitalize on this natural ingenuity.

Spiders: A Case Study in Necrobotics

Research on spiders has shown great promise for necrobotics. Their hydraulic limb systems, consisting of fluid-filled chambers and controllable valves, let them produce precise movements with remarkable strength and dexterity. In 2022, a Rice University research team created a robotic gripper from the legs of dead spiders. This necrobotic gripper showed remarkable grasping abilities: it could handle small objects and lift loads of up to 130% of its own body mass.

Beyond Spiders: A Wider Realm of Biotic Materials

Necrobotics is not just about spiders; it includes a wider range of biotic materials, such as insect wings, fish scales, and bird feathers. Every material has special qualities that can be used for certain purposes. For example, insect wings could be used for movement and propulsion, and bird feathers could be used for lightweight, flexible structures.

Potential Applications of Necrobotics

Necrobotics has a wide range of possible uses across industries. In medicine, it could enable minimally invasive surgical instruments, precise drug-delivery microrobots, and prosthetic limbs that emulate natural muscle movement.
In archaeology, necrobotic tools could assist with the careful excavation of delicate artifacts, letting archaeologists handle and preserve priceless historical treasures with minimal harm. In environmental remediation, necrobotics could yield robots that navigate contaminated areas and clean them up, removing dangerous materials from the environment.

Ethical Considerations and Sustainability

Despite the enormous potential of necrobotics, ethical issues must be addressed and sustainable methods must be followed. Careful regulation of the source of biotic materials used in robotics is necessary to reduce the negative effects on natural ecosystems. To further minimize waste and the environmental impact of necrobotic components, efforts should be made to maximize their longevity and reusability.

Conclusion: A Glimpse into the Future of Robotics

Necrobotics is a new paradigm in robotics that offers a fresh way to create intelligent, effective, and sustainable machines. By harnessing nature's ingenuity, necrobotics could transform numerous industries and influence the direction of technology. As this field continues to evolve, it is crucial to balance scientific innovation with ethical considerations and environmental stewardship to ensure a responsible and sustainable future for necrobotics.



Embracing the Robotic Revolution: The Convergence of AI and Robotics is Within Reach

Reading Time: 3 minutes

Artificial Intelligence (AI) has witnessed a transformative phase with the introduction of large language models (LLMs) like ChatGPT and Bard. These models have revolutionized AI for language processing and problem-solving. However, the next frontier for AI lies in robotics. Building AI-powered robots that can learn to interact with the physical world has the potential to enhance various industries, from logistics and manufacturing to healthcare and agriculture. In this article, we will explore the parallels between the success of LLMs in language processing and the upcoming era of AI-powered robotics.

Building on the Success of GPT

To understand how to build the next generation of robotics using the principles that made LLMs successful, we need to look at the core pillars of their achievements.

  1. Foundation Model Approach: The concept of foundation models, as seen in GPT, focuses on training a single AI model on a vast and diverse dataset. Unlike previous approaches where specific AI models were created for distinct tasks, a foundation model can be universally utilized. This general model performs well across multiple tasks and leverages learnings from various domains, improving its performance overall.
  2. Training on a Large Proprietary and High-Quality Dataset: The success of LLMs can be attributed to training them on large and diverse datasets. In the case of GPT, the models were trained on a wide range of data sources, including books, news articles, social media posts, and more. The high-quality dataset, informed by user preferences and helpful answers, has been instrumental in achieving unprecedented performance.
  3. Role of Reinforcement Learning (RL): Reinforcement learning, combined with human feedback, plays a crucial role in fine-tuning and aligning the AI model’s responses with human preferences. GPT uses reinforcement learning from human feedback (RLHF) to enhance its capabilities. This approach lets the model progress toward its goal through trial and error, reaching human-level capabilities by learning from human feedback.

Applying GPT Principles to Robotics

The foundation model approach, training on a large proprietary dataset, and incorporating reinforcement learning have paved the way for the development of AI-powered robots. Just as GPT models can process text and images, robots equipped with foundation models can understand their physical surroundings, make informed decisions, and adapt their actions to changing circumstances.

  • Revamping Robotics: Exploring New Frontiers with Advanced Techniques: Similar to language models, applying the foundation model approach to robotics enables the development of one AI model that works across multiple tasks in the physical world. This shift allows the AI to respond better to edge-case scenarios and approach human-level autonomy. Training on a diverse dataset collected from real-world interactions is essential for teaching robots how to navigate and operate effectively.
  • Harnessing the Power of Training on Extensive, Exclusive, and High-Quality Datasets: Unlike language or image processing, there is no preexisting dataset that represents how robots should interact with the physical world. Consequently, training robots to learn from real-world physical interactions is difficult but crucial. Deploying a fleet of robots in production environments becomes necessary to gather the data needed for training comprehensive robotics models.
  • Empowering Robots through the Role of Reinforcement Learning: In robotics, as in language processing, pure supervised learning is insufficient. Robotic control and manipulation require reinforcement learning (RL) to make progress toward goals that have no single correct answer. Deep reinforcement learning (deep RL) enables robots to adapt, learn, and improve their skills as they encounter new scenarios and challenges.
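The trial-and-error loop described in the bullets above can be caricatured in a few lines of Python. The minimal sketch below uses tabular Q-learning on an invented one-dimensional "track" task; the task, reward values, and hyperparameters are illustrative assumptions, and real robotic control uses deep RL over continuous sensor and action spaces.

```python
import random

# A robot on a 1-D track of 5 cells must reach the goal at cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: Q[s][i])
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else -0.01  # reward only at the goal
            # standard Q-learning update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)  # learned greedy action per state (1 = move right)
```

The point of the sketch is the update rule: no state has a labeled "correct" action, yet the agent converges on moving toward the goal purely from the reward signal, which is the property that makes RL suitable for robotic control.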

The Future of AI Robotics

The combination of these principles and advancements in AI and robotics sets the stage for a revolution in the field. The growth trajectory of robotic foundation models is rapidly accelerating. Already, applications such as precise object manipulation in real-world production environments are being deployed commercially. In the coming years, we can expect to see an exponential increase in commercially viable robotic applications across various industries.


The GPT moment for AI robotics is on the horizon. By leveraging the foundation model approach, training on large datasets, and incorporating reinforcement learning, AI-powered robots are poised to transform industries by enhancing repetitive tasks and adapting to dynamic physical environments. As we enter this new era of AI robotics, the possibilities for automation and efficiencies in the physical world are vast and promising.

Links worth visiting:

Role of Artificial Intelligence and Machine Learning in Robotics

AI in Robotics: 6 Groundbreaking Applications


This article was written using Copy.ai and is based on a TechCrunch article.


Underwater Robotics

Reading Time: 3 minutes

Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) are two types of underwater robotic systems that play an increasingly significant role in ocean exploration, scientific research, and various industrial operations. Although both systems are designed to operate underwater, they differ in terms of how they are controlled and the tasks they are capable of performing. Collectively, both AUVs and ROVs are categorized as Unmanned Underwater Vehicles (UUVs).

Two autonomous underwater vehicles resting on land
Project Wilton Iver AUVs, courtesy of our partner, SeeByte

An AUV is an autonomous underwater vehicle that typically (though not always) operates independently of direct human control. It is equipped with sensors, instruments, and navigation systems that allow it to perform a range of tasks, including mapping the ocean floor, collecting environmental data, and conducting scientific surveys at sea. AUVs are programmed to perform specific missions and can make decisions based on real-time data, making them strong candidates for long-term, repetitive missions. However, due to the lack of remote off-grid power solutions, big-data transmission, and edge-compute capabilities, the current generation of AUVs still has limited operational reach and requires intervention from human operators.
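The pre-programmed, self-deciding mission style described above can be sketched in miniature. The waypoint names, battery model, and reserve threshold below are invented for illustration and bear no relation to any real AUV control stack:

```python
# Toy AUV mission loop: visit survey waypoints, but abort and surface
# when the battery would drop below a reserve kept for the return trip.
RESERVE = 20.0  # percent battery reserve (assumed figure)

def run_mission(waypoints, battery=100.0, cost_per_leg=15.0):
    visited, log = [], []
    for wp in waypoints:
        if battery - cost_per_leg < RESERVE:
            # the "decision based on real-time data": cut the mission short
            log.append("abort: battery reserve reached, surfacing")
            break
        battery -= cost_per_leg
        visited.append(wp)
        log.append(f"surveyed {wp}, battery {battery:.0f}%")
    return visited, log

visited, log = run_mission(["A", "B", "C", "D", "E", "F"])
print(visited)  # only as many waypoints as the battery budget allows
```

Even this toy version shows why limited power constrains operational reach: the mission plan is fixed in advance, but how much of it gets executed depends on energy available in the field.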

A remotely operated vehicle inspecting underwater structures
Subsea 7’s AIV performing a mid-water riser inspection using sonar, courtesy of our partner, SeeByte

Remotely Operated Vehicles (ROVs), on the other hand, are underwater robots controlled in real time by a human operator. Like their AUV counterparts, ROVs are equipped with cameras, lights, and various sensors that allow them to perform tasks such as inspection, maintenance, and repair of underwater structures and vessels. ROVs can also be equipped with sampling tools and other scientific instruments, making them useful for research missions. ROVs play a very prominent role in deep-sea scientific missions studying benthic ecosystems, such as during the EV Nautilus cruises. (More on this later in another post.)
The main advantage of ROVs is that they allow for direct human control, which can be especially useful in situations where real-time decision-making is required. This makes ROVs ideal for missions that require a high degree of precision and control, such as the inspection of underwater pipelines, the repair of underwater communication cables, or the removal of debris from shipwrecks. Additionally, ROVs can be equipped with manipulator arms and other tools, making them capable of performing tasks that are (currently) not possible with AUVs.

Despite the differences between AUVs and ROVs, both systems play an important role in a variety of industries. In the oil and gas industry, for example, both types of underwater robots are used for exploration and production, as well as for monitoring and maintenance of underwater pipelines and platforms. In scientific research, both AUVs and ROVs are used for oceanographic surveys, as well as for monitoring ocean ecosystems and the effects of climate change.

As the blue tech industry continues to advance, it is likely that UUVs will play an even greater role in ocean exploration, scientific research, and industrial operations in the years to come, making them a pivotal component of the rapidly growing blue economy.

In my view, the article is a clear and concise explanation of the differences between AUVs and ROVs, two types of underwater robotic systems widely used in the blue economy. It gives a brief overview of each system's main features, advantages, and disadvantages, along with examples of how they are used in various industries and applications. It also uses relevant images and links to illustrate the concepts and point interested readers to more information.

However, the article could also be improved in some ways. For instance, it could provide more details on the current challenges and limitations of AUVs and ROVs, such as the technical, operational, and regulatory issues that affect their performance and deployment. It could also discuss some of the emerging trends and innovations in the field of underwater robotics, such as the development of hybrid systems that combine the features of both AUVs and ROVs, or the use of artificial intelligence and machine learning to enhance the autonomy and capabilities of UUVs. It could also address some of the ethical and social implications of using UUVs in the ocean, such as the potential impacts on the marine environment and biodiversity, or the legal and moral responsibilities of the operators and users of UUVs.

Overall, the article is a good introduction to underwater robotics, but it could go deeper and be more critical in its analysis and discussion.

Resources: Underwater Robotics. Autonomous Underwater Vehicles, AUVs, ROVs | Ocean Motion Tech Blog (medium.com)



Telesurgery. Worthwhile or dangerous?

Reading Time: 2 minutes

Would you ever believe that a surgeon could operate on a patient who is 400 km away? That is exactly what telesurgery allows. It is an innovative surgical technique that connects patients and surgeons who are geographically distant: the surgeon observes the surgical area on a screen and uses a haptic arm to move the robotic arm during the operation.

On the one hand, telesurgery has many benefits over conventional surgical methods. First and foremost, it is an excellent solution for those who cannot travel to get medical care, whether because of financial constraints or travel-related health issues. Secondly, it enables surgery through smaller incisions, and its robotic arms can reach hard-to-access areas of the body. It also filters out a surgeon's hand tremor, improving surgical accuracy. Consequently, the risks of damaging surrounding structures, of blood loss, and of infection are reduced. Beyond this, telesurgery gives surgeons from different centres the opportunity to collaborate and operate on a patient simultaneously.

On the other hand, telerobotic surgery has its problems. Firstly, time lag is considered a major drawback: a delay of more than 2 seconds has been found to pose a threat to the patient. Secondly, being operated on by a surgeon the patient has never met face-to-face can cause distrust and anxiety. Finally, Tamas Haidegger, a researcher at Obuda University in Budapest who studies space telesurgery, has noted that even with a master surgical plan, things can go wrong: blood circulation can collapse, or a patient may react unexpectedly to certain drugs. That is why a trained surgeon on-site is still a necessity. Nonetheless, he believes that robots will soon be augmented with artificial intelligence and able to go into autopilot mode, which would be a significant breakthrough in human history!
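A back-of-envelope calculation helps put the 400 km figure next to the 2-second threshold. Everything below except those two numbers is an illustrative assumption: the point is that signal propagation itself contributes almost nothing to the lag, which instead comes from video encoding, network routing, and control processing.

```python
# Rough latency budget for a 400 km telesurgery link.
SPEED_OF_LIGHT_FIBER = 2.0e8  # m/s, roughly 2/3 of c in optical fiber
distance_m = 400e3

# Round-trip propagation delay over the fiber alone.
propagation_s = 2 * distance_m / SPEED_OF_LIGHT_FIBER
print(f"round-trip propagation: {propagation_s * 1000:.0f} ms")

# Assumed overhead for video encode/decode, routing, and control loops.
overhead_s = 0.150
total_s = propagation_s + overhead_s
print(f"estimated total delay: {total_s:.3f} s")
print("within 2 s safety threshold:", total_s < 2.0)
```

Under these assumptions the propagation term is only about 4 ms, so the 2-second danger zone is reached not by distance but by congested networks or slow processing, which is why link quality matters more than geography.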

Having weighed the pros and cons of telesurgery, I believe the technology is worth adopting widely. I agree that the idea of a robot performing an operation can frighten some people. In reality, however, surgeons remain in full control of the machine at all times, and the robot's movements are far more precise.






Samsung’s NEON digital avatars touted as artificial humans

Reading Time: 4 minutes

The fact that Samsung would appear with a new project at CES 2020 had been making noise for a long time. Everyone was wondering what "artificial humans" could be. And one thing is certain: after all the media buzz around the project, everyone expected something completely different, especially after the prematurely disclosed material, which can be watched below.


What exactly is this project about?

NEON is the idea of Samsung researcher Pranav Mistry. The project emerged out of STAR Labs – Samsung Technology and Advanced Research Labs – and is funded by Samsung, but it’s not actually a Samsung company.

The NEON project consists of realistic human avatars that are computationally generated and can interact with people in real time. At this point, each NEON is created from footage of an actual person fed into a machine-learning model. A NEON is meant to mimic real human appearance and emotions, with its own personality and the ability to behave like a human. The avatars can also remember and learn.

According to Pranav Mistry, NEON isn't meant to replace Samsung's digital assistant Bixby. Moreover, it won't be built into Samsung products; NEON operates independently.


Examples of the NEON’s application

Each NEON avatar can be customized for different tasks and can respond to queries with a latency of less than a few milliseconds. They're not intended to be mere visual skins for AI assistants but are meant for more varied uses. If we are to believe STAR Labs CEO Pranav Mistry, in the near future everyone will be able to license or subscribe to a NEON. The roles can differ: a service representative, a financial advisor, a healthcare provider, or a concierge. The founder also promises that NEONs will work as TV anchors, spokespeople, or movie actors. They can simply be companions and friends, if people want them.

The first wave of Neons are modeled after real people.
Source: https://www.neon.life/


NEONs will work as TV anchors, spokespeople, or movie actors.
Source: https://www.neon.life/


What technology is behind it?

NEON is built on two main technologies. The first is Core R3, which stands for reality, real-time, and responsiveness. Core R3 is the graphics engine that powers the avatars' natural movements, expressions, and speech. The second is Spectra, which is responsible for the artificial intelligence side: intelligence, learning, emotions, and memory. Spectra is not ready for launch yet; it is still being developed, and the company says it will present the technology later this year.

Neon’s Core R3 graphics engine demonstrated at CES 2020.
Source: https://www.cnet.com/news/samsung-neon-artificial-humans-are-confusing-everyone-we-set-record-straight/


What about the uncanny valley?

If NEON avatars are to become real companions in everyday life, we should ask whether their very realism is a problem. This is the phenomenon of the uncanny valley: the hypothesis that a robot which looks or acts almost, but not quite, like a human being causes observers to feel uneasy or even repulsed. While some people marvel at how STAR Labs has worked out every detail, others feel at least uncomfortable.


Why is everyone disappointed?

"NEON is like a new kind of life. There are millions of species on our planet and we hope to add one more," said STAR Labs CEO Pranav Mistry before the CES 2020 presentation. No wonder, then, that nobody was awestruck when it turned out that NEON is just a highly detailed digital avatar. In addition, the demo presented at the show was fully controlled by people from STAR Labs. All the media hype made everyone wait impatiently for the show, only to find out that NEON still has a long way to go.

Still, don't believe the haters: NEON avatars look really good, and the project clearly has potential. The final version of the STAR Labs venture has yet to arrive, and we shouldn't believe every media report. It will soon be clear whether the company can combine its two ambitious technologies, the avatars and the AI.


Do you see a practical application of Samsung’s NEON in the near future? Would you feel comfortable if your teacher wasn’t a real person but Samsung’s NEON?



[1] https://www.theverge.com/2020/1/7/21051390/samsung-artificial-human-neon-digital-avatar-project-star-labs

[2] https://www.theverge.com/2020/1/8/21056424/neon-ceo-artificial-humans-samsung-ai-ces-2020

[3] https://www.engadget.com/2020/01/05/samsung-neon-artificial-human-teaser/

[4] https://www.cnbc.com/2020/01/06/samsung-neon-artificial-human-announced-at-ces-2020.html

[5] https://www.cnet.com/news/samsung-neon-project-finally-unveiled-humanoid-ai-chatbot-artificial-humans/

[6] https://www.cnet.com/news/samsung-neon-heres-when-well-get-details-on-the-mysterious-ai/

[7] https://economictimes.indiatimes.com/magazines/panache/meet-neon-samsungs-new-ai-powered-robot-which-can-converse-sympathise/articleshow/73135240.cms

[8] https://www.livemint.com/companies/people/we-ll-live-in-a-world-where-machines-become-humane-pranav-mistry-11577124133419.html

[9] https://mashable.com/article/samsung-star-labs-neon-ces/?europe=true

[10] https://www.wired.co.uk/article/samsung-neon-digital-avatars


Scaled Robotics – an innovator in the construction industry

Reading Time: 3 minutes

It has been exactly a month since the winner of the latest edition of TechCrunch Disrupt Berlin 2019 was announced. Congratulations to the newest Startup Battlefield winner, Scaled Robotics, who designed a robot that can produce 3D progress maps of construction sites in minutes.

Scaled Robotics wins the Startup Battlefield
Source: https://techcrunch.com/2019/12/12/scaled-robotics-wins-startup-battlefield-at-disrupt-berlin-2019/


How does Scaled Robotics work?

The startup has created a robot that trundles autonomously around construction sites, using a 360-degree camera and a custom lidar system to systematically document its surroundings. All this information feeds into a software backend where supervisors can check, for example, which pieces are in place on which floor, whether they have been placed within the required tolerances, or whether there are safety issues such as too much debris on the ground in work areas. The data is assembled automatically, and the robot can be either autonomous or manually controlled.


Why do construction companies need Scaled Robotics?

Construction is one of the world’s largest industries, but also one of its most inefficient and wasteful. By some estimates, nearly 20% of every construction project is rework. The problem of waste and rework is so widespread that the industry operates on a 1-2% margin on average. The root of this problem is that the construction industry still relies on tools and processes developed over 100 years ago to tackle the problems of today. The robot can make its rounds faster than a couple of humans with measuring tapes and clipboards, and someone carrying a stationary laser-ranging device from room to room simply works too slowly. Working from outdated data is one of the main problems for developers, as one client case confirms. One of the first times the startup collected data on a site, the client was completely convinced everything they had done was perfect. Scaled Robotics put the data in front of them, and they found that a structural wall was simply missing, and had been missing for 4 weeks. Thanks to Scaled Robotics' technology, such situations do not take place.

Simultaneous localization and mapping (SLAM) tech
Source: https://techcrunch.com/2019/12/11/scaled-robotics-keeps-an-autonomous-eye-on-busy-construction-sites/


Technologies that support people’s work

There is no doubt that Scaled Robotics' entire competitive advantage lies in innovative technology. An advantage of simultaneous localization and mapping (SLAM) tech is that it measures from multiple points of view over time, building a highly accurate and rich model of the environment. The Automated Construction Verification system, combined with scans from traditional laser scanners, can verify the quality of the build, providing high-precision information to localize mistakes and prevent costly errors. In addition, Automated Progress Monitoring tracks the progress of the construction project and gives site managers actionable information to prevent costly errors. By comparing the scan to a source CAD model of the building, the system can paint a very precise picture of the progress being made. Scaled Robotics also built a special computer vision model suited to the task of separating obstructions from the construction itself and identifying everything in between.
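The scan-versus-model comparison described above can be sketched in miniature. The 2-D points, element names, and 5 cm tolerance below are invented for illustration; a production system registers dense 3-D point clouds against a full CAD/BIM model rather than a handful of labeled coordinates:

```python
import math

# Compare scanned element positions against the design model and flag
# anything placed out of tolerance or not scanned at all.
TOLERANCE_M = 0.05  # assumed placement tolerance

model = {"wall_a": (0.0, 0.0), "column_b": (4.0, 2.0), "wall_c": (8.0, 0.0)}
scan = {"wall_a": (0.01, 0.0), "column_b": (4.12, 2.0)}  # wall_c absent

def verify(model, scan, tol=TOLERANCE_M):
    report = {}
    for name, (mx, my) in model.items():
        if name not in scan:
            report[name] = "missing"  # e.g. the absent structural wall
            continue
        sx, sy = scan[name]
        dev = math.hypot(sx - mx, sy - my)  # Euclidean deviation
        report[name] = "ok" if dev <= tol else "out of tolerance"
    return report

print(verify(model, scan))
```

Even this toy version reproduces the story from the client case: an element present in the design model but absent from the scan surfaces immediately as "missing", instead of going unnoticed for weeks.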

Scaled Robotics has rethought the entire construction process. Its mission is to modernize construction with robotics and artificial intelligence, thereby creating a manufacturing process that is lean, efficient, and cost-effective.

Does Scaled Robotics have a chance to revolutionize the construction industry on a global scale?



[1] https://www.scaledrobotics.com/

[2] https://techcrunch.com/2019/12/11/scaled-robotics-keeps-an-autonomous-eye-on-busy-construction-sites/

[3] https://techcrunch.com/2019/08/02/digitizing-construction-sites-with-scaled-robotics/

[4] https://techcrunch.com/2019/12/12/scaled-robotics-wins-startup-battlefield-at-disrupt-berlin-2019/

[5] https://pitchbook.com/profiles/company/279687-25

[6] https://angel.co/company/scaled-robotics

[7] https://www.theburnin.com/startups/scaled-robotics-wins-techcrunch-disrupt-battlefield-3d-construction-site-progress-maps-2019-12/
