
A Self-Riding Electric Motorbike with AI Features

Reading Time: 2 minutes

The Motoroid 2, a self-riding electric motorcycle from Yamaha, does away with conventional controls like handlebars. It builds on the Motoroid concept from 2017 and has been developed into a working prototype. Using gyroscopes and AI-powered image recognition, the bike can balance and move on its own, with or without a rider.

“Motoroid 2 is a vehicle for personal mobility that can recognise its owner, get up off its kickstand and move alongside its rider,” the company said.

“[It] has a distinctly lifelike feel when somebody is riding on its back and has a presence more like a lifetime companion.”

“Motorcycles will never ride autonomously; it doesn’t make sense,” said Dr Markus Schramm, head of BMW Motorrad. And rightly so: motorcycles are inherently unstable vehicles with a shifting centre of gravity.

Motorcycling is bodily-kinesthetic. Depending on the type of motorcycle, the rider’s body position and technique must change to control the machine. And with experience, motorcyclists develop the muscle memory and intuition needed for split-second decisions.

Second, motorcycles experience more variable G-forces than aircraft or cars, and unlike for those vehicles, we don’t have much formalised literature or data on the bike-rider dynamic under G-forces. In short, G’s on a motorcycle are considerably more complicated.
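To make the G-force point concrete, here is a small sketch of idealized steady-state cornering physics. It is a simplification (no tire width, suspension, or rider lean-off, all of which complicate real bike-rider dynamics), and the speed and corner radius in the example are illustrative, not figures from this article:

```python
import math

def cornering_dynamics(speed_kmh: float, radius_m: float) -> dict:
    """Idealized steady-state cornering: lateral acceleration, lean angle, load."""
    g = 9.81
    v = speed_kmh / 3.6                              # km/h -> m/s
    a_lat = v ** 2 / radius_m                        # lateral acceleration (m/s^2)
    lean_deg = math.degrees(math.atan(a_lat / g))    # lean angle from vertical
    total_g = math.hypot(1.0, a_lat / g)             # combined load on bike + rider
    return {"lateral_g": a_lat / g, "lean_deg": lean_deg, "total_g": total_g}

# 100 km/h through a 100 m radius bend
d = cornering_dynamics(100, 100)
print(f"lateral: {d['lateral_g']:.2f} g, lean: {d['lean_deg']:.1f} deg, "
      f"load: {d['total_g']:.2f} g")
```

Even this toy model shows why the problem is hard: unlike a car, the vehicle must actively lean to the computed angle, and the load on the tires changes continuously with it.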

Third, motorcycles are weight-sensitive. Considering how energy-inefficient today’s AI chips are compared to human brains, you’d have to pack huge batteries to endow the chip with enough “intuition” to ride a motorcycle, leading to a massive weight disadvantage.

Back in 2018, BMW developed a self-driving motorcycle that could balance itself – accelerating, leaning, and stopping on its own. However, it still required a human operator, who sent commands to the bike via an antenna at the back. At the time, BMW said it had no plans to commercialize the project.

Key points:

 1 Motorcycles require constant shifting body positioning and technique based on conditions, something current AI technology cannot adequately replicate. Human riders develop muscle memory and intuition for split-second reflexes.

 2 Motorcycles experience more variable G-forces than cars or planes. There is insufficient data and models capturing the nuances of bike-rider dynamics in these situations for an AI system to safely control a motorcycle.

 3 Weight sensitivity limits how much battery power can be added to enable autonomous functionality without compromising motorcycle handling and efficiency. Existing AI systems are far too energy inefficient compared to human cognition.

While some limited self-balancing functionality is possible, as BMW demonstrated on a test model, removing human operational control poses too many unsolved stability, dynamics, and energy efficiency challenges. Ultimately motorcycling intrinsically depends on human bodily movement, instincts, and reactions. An autonomous motorcycle thus remains implausible. Companies would be better served focusing innovation on rider-assistance features rather than eliminating the human element central to motorcycling.

Sources:

Here are a couple of videos on motorcycles and AI if you are interested:

https://fb.watch/p3NE24gt7y/

https://spectrum.ieee.org/ghostrider-the-self-driving-motorbike-that-launched-anthony-levandowski

https://www.wired.com/2016/08/get-know-aboard-self-driving-motorcycle/

EU Lawmakers Pave the Way for Strict AI Regulation

Reading Time: 3 minutes

After a three-day negotiation, the Council presidency and the European Parliament’s negotiators have reached a provisional agreement on the proposal for harmonized rules on artificial intelligence (AI), known as the Artificial Intelligence Act. The proposed regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. It also aims to stimulate investment and innovation in AI within Europe.

EU agrees landmark rules on artificial intelligence

“This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.” — Carme Artigas, Spanish secretary of state for digitalisation and artificial intelligence

The AI Act is a flagship legislative initiative with the potential to foster the development and uptake of safe and trustworthy AI across the EU’s single market by both private and public actors. The main idea is to regulate AI based on its capacity to cause harm to society, following a ‘risk-based’ approach: the higher the risk, the stricter the rules. As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation on the world stage.


Among the new rules, legislators agreed to strict restrictions on the use of facial recognition technology except for narrowly defined law enforcement exceptions.

Considering the specificities of law enforcement authorities and the need to preserve their ability to use AI in their vital work, several changes to the Commission proposal were agreed relating to the use of AI systems for law enforcement purposes. Subject to appropriate safeguards, these changes are meant to reflect the need to respect the confidentiality of sensitive operational data in relation to their activities. For example, an emergency procedure was introduced allowing law enforcement agencies to deploy, in urgent cases, a high-risk AI tool that has not passed the conformity assessment procedure. However, a specific mechanism has also been introduced to ensure that fundamental rights are sufficiently protected against any potential misuse of AI systems.

Moreover, as regards the use of real-time remote biometric identification systems in publicly accessible spaces, the provisional agreement clarifies the objectives where such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore be exceptionally allowed to use such systems. The compromise agreement provides for additional safeguards and limits these exceptions to cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.

The legislation also includes bans on the use of AI for “social scoring” — using metrics to establish how upstanding someone is — and AI systems that “manipulate human behaviour to circumvent their free will”. The use of AI to exploit those vulnerable because of their age, disability or economic situation is also banned.


Some tech groups were not pleased. Cecilia Bonefeld-Dahl, director-general of DigitalEurope, which represents the continent’s technology sector, said: “We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head.

“The new requirements — on top of other sweeping new laws like the Data Act — will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers.”

In conclusion, the EU’s Artificial Intelligence Act is a groundbreaking step towards harmonizing AI regulations within the region. The act follows a ‘risk-based’ approach and aims to set a global standard for AI governance. Key provisions address facial recognition, law enforcement needs, and ethical considerations. However, some tech groups have raised concerns regarding potential resource burdens. Further discussions may be necessary to balance regulation and innovation in the evolving tech landscape.

Sources:

Additional info:

https://www.eeas.europa.eu/delegations/australia/world%E2%80%99s-first-ai-law-eu-announces-provisional-agreement-ai-act_en?s=163

https://www.reuters.com/technology/eu-lawmakers-committee-reaches-deal-artificial-intelligence-act-2023-04-27/

https://www.bloomberg.com/news/articles/2023-06-14/eu-lawmakers-vote-to-ban-remote-face-scanning-in-public

https://apnews.com/article/ai-act-europe-regulation-59466a4d8fd3597b04542ef25831322c

https://www.technologyreview.com/2023/12/11/1084942/five-things-you-need-to-know-about-the-eus-new-ai-act/

Disney’s Dive into Real-Time Viewer Feedback: AI Revolutionizing Film Engagement

Reading Time: 2 minutes

Disney has used artificial intelligence (AI) to track the emotions of moviegoers in real time. This approach goes beyond traditional surveys and makes audience reactions a live part of the cinematic experience. For Disney, the audience’s emotional response is of utmost importance, and this technology allows the studio to understand viewers better.

Disney Research presented factorized variational autoencoders (FVAEs) at a recent conference in July. The revolutionary technique analyzes facial expressions using deep learning to decipher intricate audience reactions.

The FVAE system was trained by observing hundreds of faces within a dimly lit theater and monitoring their responses, detecting smiles, tears, expressions of boredom, and even moments of sleep.

Notably, after just 10 minutes of observation, the FVAE model can predict an audience member’s reactions for the remainder of the film. This underscores Disney’s commitment to understanding and enhancing the viewer experience through technology.

To conduct the research, Disney’s team used a 400-seat theater equipped with four infrared cameras. Across more than 150 screenings of movies such as “The Jungle Book,” “Big Hero 6,” “Star Wars: The Force Awakens,” and “Zootopia,” the system collected roughly 16 million facial landmarks from 3,179 audience members, a volume of data far beyond what human analysts could process.

This dataset was fed into a neural network to identify patterns and correlations between facial expressions and audience reactions. The combination of AI technology and audience measurement has the potential to change the way films are crafted and experienced, placing Disney at the forefront of a new era in cinematic storytelling.
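The core idea behind a factorized model of audience reactions can be sketched with a toy example: reactions over time form a (viewers x frames) matrix that is approximately low-rank, because most viewers respond to the same on-screen moments. The real FVAE is a neural model from Disney Research’s paper; here a plain truncated SVD on simulated data stands in to show the factorization intuition, and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_viewers, n_frames, rank = 50, 200, 2

# Simulate shared reaction patterns (e.g. "laughs" and "gasps" over time)
patterns = rng.normal(size=(rank, n_frames))   # per-moment reaction signals
weights = rng.normal(size=(n_viewers, rank))   # per-viewer sensitivity to each
reactions = weights @ patterns + 0.1 * rng.normal(size=(n_viewers, n_frames))

# Factorize: U captures viewers, Vt captures time; keep the top `rank` factors
U, s, Vt = np.linalg.svd(reactions, full_matrices=False)
approx = U[:, :rank] * s[:rank] @ Vt[:rank]

err = np.linalg.norm(reactions - approx) / np.linalg.norm(reactions)
print(f"relative error of rank-{rank} reconstruction: {err:.3f}")
```

Because a few shared factors explain most of the matrix, observing the first few minutes of a viewer’s row is enough to estimate their weights and extrapolate the rest, which is the gist of the 10-minute prediction claim above.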

Disney’s breakthrough in combining AI and filmmaking is a game-changer for the industry. This innovative approach opens up new avenues for customizing storytelling methods, improving audience interaction, and shaping the future of cinematic art. The journey of AI-driven, real-time viewer feedback is not just a technological feat but also a significant milestone in the evolution of the cinematic landscape.

Sources:

Additional information:

https://aitoolsmasters.com/ai-disney/

https://medium.com/@analyticsemergingindia/top-10-use-cases-of-data-analytics-in-film-industry-db20fda72cf1

https://appinventiv.com/blog/ai-in-media-and-entertainment/

https://www.toolify.ai/ai-news/unbelievable-did-artificial-intelligence-pen-disneys-wish-499013

The Rise of AI Therapist Eliza

Reading Time: 3 minutes

Introduction

In recent years, there have been significant advancements in the field of mental health, and artificial intelligence (AI) has played a crucial role in driving this progress. One of the most prominent examples of AI-powered mental health technology is Eliza AI, which offers accessible and non-judgmental mental health assistance to its users. However, as Eliza AI becomes increasingly popular, questions arise about its relationship with human mental health professionals. In this post, we explore the intricate dynamics between AI therapists like Eliza and their human counterparts.

What is ELIZA and what is the purpose of it?

ELIZA is one of the earliest examples of a computer program designed to simulate human-like conversation. It was created in the mid-1960s by Joseph Weizenbaum, a computer scientist at the Massachusetts Institute of Technology (MIT). ELIZA was primarily developed to explore the possibilities of natural language processing and human-computer interaction.

Eliza was designed to replicate the techniques used by a Rogerian psychotherapist, a method pioneered by psychologist Carl Rogers. This therapeutic approach involves actively listening to the client and prompting them to delve deeper into their thoughts and feelings. By engaging users in text-based conversations and responding with empathy and comprehension, Eliza aimed to emulate this approach.

How does ELIZA work?

Generally, you enter a sentence into ELIZA and the program produces a new sentence in response.

[Figure: an example ELIZA conversation]

This is somewhat akin to today’s generative AI. You enter a prompt into a generative AI app such as ChatGPT, and ChatGPT generates a response. A notable difference is that ELIZA is conventionally devised to take in only a single sentence at a time and produce only a single sentence of output at a time.

In the case of ChatGPT, your prompt can be convoluted and many sentences long, spanning multiple paragraphs. The same goes for the output: ChatGPT can generate many sentences or paragraphs in response.

Scripts had to be written by people and fed into ELIZA. Whatever you saw ELIZA doing was driven by a human-devised script; it was the human who brought to the table the script that made ELIZA appear to exhibit intelligence.

The better the script fed into ELIZA, the better it performs. The idea, then, was that people might write ever more elaborate scripts to run in ELIZA, and ELIZA would seem to keep getting better and better.
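The script-driven mechanism described above can be sketched in a few lines. This is not Weizenbaum’s original DOCTOR script, which was far richer; the patterns and responses here are invented purely to show how human-written rules, not any learning, drive the "intelligence":

```python
import random
import re

# A minimal ELIZA-style script: (pattern, responses) pairs, where a captured
# fragment of the user's sentence is substituted back into the reply.
SCRIPT = [
    (r".*\bI need (.*)", ["Why do you need {0}?",
                          "Would it really help you to get {0}?"]),
    (r".*\bI am (.*)",   ["How long have you been {0}?",
                          "Why do you think you are {0}?"]),
    (r".*\bmy (\w+)",    ["Tell me more about your {0}.",
                          "Why does your {0} concern you?"]),
    (r".*",              ["Please go on.",
                          "How does that make you feel?"]),
]

def respond(sentence: str) -> str:
    """One input sentence in, one response sentence out, as in ELIZA."""
    for pattern, responses in SCRIPT:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please go on."

print(respond("I am feeling sad"))  # e.g. "Why do you think you are feeling sad?"
```

Adding more pattern-response pairs makes the conversation feel richer, which is exactly the "better script, better ELIZA" dynamic: the improvement lives entirely in the human-authored rules.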

What is the ELIZA effect?

From a psychological perspective, the ELIZA effect is essentially a form of cognitive dissonance: a user’s awareness of a computer’s programming limitations does not square with their behavior towards, and perception of, that computer’s outputs. Because a machine mimics human intelligence, a person believes it is intelligent.

Our propensity to anthropomorphize does not begin and end at computers. Under certain circumstances, we humans attribute human characteristics to all kinds of things, from animals to plants to cars. It’s simply a way for us to relate to a particular thing, according to Colin Allen, a professor at the University of Pittsburgh who focuses on the cognitive abilities of both animals and machines. And a quick survey of the way many AI systems are designed and packaged today makes it clear how this tendency has spread to our relationship with technology.

“Rather than just relying on people’s tendency to do this, [technology] is being designed and presented in ways that encourage us,” he told Built In, adding that it’s “all part of keeping our attention” in the midst of everything else. “You want people to feel like they’re in some sort of interesting interaction with this thing.”

Think about it: Companies will design robots to be cute and childlike in an effort to make people more comfortable around them. Groundbreaking creations like the Tesla robot and Hanson Robotics’ Sophia are built to look like humans, while others are designed to act like humans. And the vast majority of AI voice assistants on the market today have human names like Alexa and Cortana. Watson, the supercomputer created by IBM that won a game of Jeopardy! in 2011, was named after the company’s founder Thomas J. Watson. Even ELIZA itself was named after Eliza Doolittle, the protagonist in George Bernard Shaw’s play Pygmalion.

Can it possibly become a threat to therapists?

AI therapy tools such as Eliza are not to be feared as competitors to human therapists. Rather, they serve as valuable resources that can complement and expand mental health care services. By utilizing the power of AI, we can extend the reach of therapy to more people, allowing for greater access to care and improved mental health outcomes.

Here is a video with more details about ELIZA if you are interested:

Sources:

  • https://www.forbes.com/sites/lanceeliot/2023/11/05/legendary-eliza-and-parry-go-head-to-head-with-chatgpt-in-a-revealing-battle-of-using-generative-ai-for-mental-health/?ss=ai&sh=77dea54c186b
  • https://builtin.com/artificial-intelligence/eliza-effect

Additional info:

https://abilitynet.org.uk/news-blogs/eliza-ellie-evolution-ai-therapist

https://medium.com/nerd-for-tech/eliza-the-chatbot-who-revolutionised-human-machine-interaction-an-introduction-582a7581f91c

https://www.humanprotocol.org/blog/what-is-the-eliza-effect-or-the-art-of-falling-in-love-with-an-ai

https://www.theswaddle.com/inadequate-mental-healthcare-has-given-rise-to-ai-therapy-whats-the-harm

https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

Green Tech and Sustainability in the Tech Industry

Reading Time: 2 minutes

The tech industry has long been a driver of innovation, but in recent years, it has also become a major player in the global sustainability movement. As the world faces pressing environmental challenges, from climate change to resource depletion, technology companies are embracing green tech and sustainability practices. These efforts are not only good for the planet but also make economic sense. In this post, I’ll discuss five ‘green’ projects with potential.

1. ‘Green Hydrogen’

“Green hydrogen” is a hot topic and is quickly becoming a major component of the world’s clean energy mix. The challenges to date have been the high cost and serious safety issues associated with producing hydrogen energy. Advances in electrolyzer and fuel cell technology are narrowing the gap on the cost issue. Plus, new sensor technology is making it safer to produce, transport and use hydrogen energy in either combustion or electrochemical processes.
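To make the cost challenge concrete, here is a back-of-the-envelope sketch of the electricity cost of producing hydrogen by electrolysis. The 39.4 kWh/kg figure is hydrogen’s higher heating value; the efficiency and electricity price in the example are illustrative assumptions, not figures from this article:

```python
# Electricity needed and cost per kg of hydrogen at a given electrolyzer
# efficiency. 39.4 kWh/kg is the higher heating value (HHV) of hydrogen.
H2_HHV_KWH_PER_KG = 39.4

def electricity_cost_per_kg(efficiency: float, price_per_kwh: float) -> float:
    """Electricity cost (currency per kg H2) for an electrolyzer."""
    kwh_needed = H2_HHV_KWH_PER_KG / efficiency
    return kwh_needed * price_per_kwh

# Hypothetical 70%-efficient electrolyzer on $0.05/kWh renewable electricity
print(f"${electricity_cost_per_kg(0.70, 0.05):.2f} per kg H2")
```

This is why electrolyzer efficiency gains and cheap renewable electricity are the two levers that narrow the green-hydrogen cost gap.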

2. Green-Focused Data Collection

Technology that collects data on energy consumption and carbon emissions has the potential to make a huge impact, since, according to the Environmental Protection Agency, electricity accounts for the second-largest share of greenhouse gas emissions in the U.S. The bottom line is that data will be paramount in fighting climate change, and technology that collects it needs to be embraced.
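A minimal sketch of what such green-focused data collection boils down to: converting metered electricity use into an emissions estimate. The grid emission factor and the meter readings below are illustrative placeholders; real factors vary by region and year:

```python
# Illustrative grid emission factor (kg CO2 per kWh); real values vary
# by region, year, and generation mix.
GRID_FACTOR_KG_PER_KWH = 0.4

def estimate_emissions(monthly_kwh: list[float]) -> float:
    """Total estimated CO2 (kg) for a series of monthly kWh readings."""
    return sum(monthly_kwh) * GRID_FACTOR_KG_PER_KWH

readings = [320.0, 295.5, 410.2]  # hypothetical smart-meter readings
print(f"estimated emissions: {estimate_emissions(readings):.1f} kg CO2")
```

The value of the real systems lies less in this arithmetic than in automating the metering and keeping the emission factors current.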

3. Mobile Apps For Conservation Management

To achieve the global “30 by 30” conservation goal of conserving 30% of the earth’s land and water by 2030, the New Zealand Department of Conservation is using mobile apps to manage its conservation work more effectively. Simple but effective tools such as these are critical for governments to consider as we all work toward our global conservation goals.

4. ‘Recycled’ Code

While technology is solving significant challenges in many aspects of protecting the environment, we have a strong opportunity closer to home. Technology reuse is a great opportunity to save hundreds of thousands of hours of compute energy. There are millions of libraries of code available to the public that can be reused to build digital solutions while reducing IT energy demands.

5. Waste-To-Energy Technology

Several companies are already developing waste-treatment solutions that generate energy in the form of steam, hot water or electricity that can later be used for internal processes. These technologies solve two problems at once, waste disposal and energy generation, so we can expect their development to accelerate.

In the future, choosing green technology won’t just be a responsible decision, but a practical one as well. Reduced energy costs, improved air quality, and the conservation of Earth’s finite resources are compelling reasons to switch to green tech. The future of green technology promises innovation, sustainability, and global cooperation. By adopting green tech solutions, we can take significant steps toward a more sustainable and harmonious coexistence with the planet.

Sources used:

And here are some additional articles to read:

https://www.ironhack.com/gb/blog/sustainability-in-tech-how-green-practices-are-shaping-the-industry-in-2024

https://www.netguru.com/blog/what-is-greentech

https://www.linkedin.com/pulse/green-tech-revolution-sustainability-technological-innovation-mjbmc

https://www.apptension.com/blog-posts/green-tech

https://medium.com/@rekart/green-innovations-technologies-shaping-a-sustainable-future-1c8005dae1fc