Ameca is a robot created by a British company – Engineered Arts. As you can see in the video above, it amazes with hyper-realistic facial expressions. It is being described as the world’s most advanced humanoid robot.
WHAT IS AMECA DESIGNED FOR?
Will Jackson, founder of Engineered Arts, said that “the reason for making a robot that looks like a person is to interact with people”. Ameca has grey-coloured skin with deliberately gender- and race-neutral characteristics. Although it can convincingly mimic human reactions and movements, it cannot walk yet. The engineers on the team say that this ability is currently under research and that we can expect it soon. Ameca has been designed as a robot platform: customers who buy it can add AI and other software to give the robot the desired abilities, which I think might be very promising, for example, for use in a metaverse. Just imagine taking your metaverse character into the real world or sending a robot to a meeting across the globe.
Will Jackson spoke to the press recently, telling them that the abilities of Ameca and the company’s prior robots are the result of over 15 years’ worth of research and development. He also said that the goal of the company has remained the same: to develop robots that are able to interact with humans in human-like ways. For now, Ameca is available for purchase or event rental through the Engineered Arts website.
HOW DID THEY MAKE AMECA SEEM SO REAL?
The appearance of the Ameca robot is based on 3D scans of real people. Thanks to that, the engineers were able to imitate human-like bone structure, skin surface and facial expressions. They also use high-precision sensors, cameras, depth sensors, LiDAR technology and microphones.
Botto makes a compelling argument for a decentralized approach to using artificial intelligence systems for creative purposes. It is also an attractive business: numerous project participants have had the opportunity to make good money. Botto generates thousands of images, but only the community of people supporting the project decides in which direction the “creator” should work and which works will go to auction. Only owners of the Botto cryptocurrency can vote.
Every week, Botto presents fifty art pieces to the community, who then vote on their favorite artwork.
How does Botto create art?
Botto’s work begins by generating a line of text describing a new painting, a kind of technical brief. The text is passed to the VQGAN neural network, which interprets it, matches image fragments to the words, and then combines them into one picture. The result is sent for verification to another neural network, CLIP, which evaluates how well the image corresponds to the words, makes corrections and sends everything back to VQGAN for revision.
When CLIP is satisfied with the result, Botto uses the GPT-3 natural language generator to create a poetic description of the painting. After that, the finished painting is sent for evaluation by human critics, who review 300 different images per day. Based on the voting results, a certain number of works are selected, given an NFT token and put up for auction.
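To make the generate-and-critique loop more concrete, here is a toy sketch of the iteration between a generator and a critic. Everything below is a stand-in: `generate_image` and `score_match` are mock functions of my own invention, not the actual VQGAN or CLIP models.

```python
import random

def generate_image(prompt, feedback=0.0, seed=None):
    # Stand-in for VQGAN: "renders" the prompt as a small numeric vector.
    rng = random.Random(seed)
    return [rng.random() + feedback for _ in range(8)]

def score_match(prompt, image):
    # Stand-in for CLIP: scores how well the image matches the prompt (0..1).
    return min(1.0, sum(image) / (len(image) * 1.5))

def create_artwork(prompt, threshold=0.9, max_rounds=20):
    """Iterate generator and critic until the critic is satisfied."""
    feedback = 0.0
    for round_no in range(max_rounds):
        image = generate_image(prompt, feedback, seed=round_no)
        score = score_match(prompt, image)
        if score >= threshold:     # the critic is satisfied -> stop revising
            return image, score, round_no + 1
        feedback += 0.1            # send "corrections" back to the generator
    return image, score, max_rounds

image, score, rounds = create_artwork("a storm over a neon city")
print(f"accepted after {rounds} round(s) with score {score:.2f}")
```

The real pipeline obviously exchanges images and gradients rather than a scalar feedback value, but the control flow, generate, criticize, revise, accept, is the same.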
How does it work?
Botto’s working philosophy is built in such a way that it constantly challenges its critics while improving its skills. To be able to “compete” with the AI, you need to pay for participation with a special cryptocurrency created exclusively for this project. The money that comes from the sale of paintings is used to buy back and “burn” tokens, so the amount of cryptocurrency in circulation is constantly decreasing and its value is growing. This gives members the opportunity to make money by selling their holdings, which weeds out random people.
In my view, it is questionable that they are engaged in artificially raising the price of their coins: they burn part of the supply so that the price of the remaining coins rises and a kind of scarcity forms over time.
For now, the robot is only learning, so its actions are based solely on the information it has received and interpreted in its own way. Over time it will be able to come up with something of its own, but there will still be no soul in it and, most importantly, no UNIQUE HISTORY.
How many of us have heard of or experienced waiting in lines to see a doctor, receiving a misdiagnosis, months of searching for the right specialist, and feeling powerless about it? In the case of rare diseases or neoplasms, the diagnostic process can take up to 7 years from the first symptoms. Sometimes it is too late for the patient: only 1 in 8 patients is properly diagnosed and treated, and of some 7,000 rare diseases, only 10% have a treatment. Today the diagnostic process for rare diseases is complex and far from optimal, and the problem has been neglected for many years, even in developed countries.
Fortunately, the development of medicine and science allows these two sectors to improve and cooperate. Even from an epidemiological point of view, Google’s search engine, which was not designed for it, was able to identify the symptoms of COVID-19 before the surge in infections was recognized by doctors. According to Dr Elena Ivanina, a gastroenterologist at Lenox Hill Hospital:
“This is not the first time Google searches have been used to predict epidemics”
If a device not intended for the diagnosis of diseases can work wonders, you can only imagine what effects we will get with Saventic.
Saventic is a company dealing in the diagnosis of rare diseases based on artificial intelligence; it creates comprehensive solutions to support healthcare systems. Saventic offers two platforms based on SARAH, which uses AI algorithms with different diagnostic approaches on two levels of use: first, the Saventic Medical API for clinics, hospitals and professional medical units; second, the Saventic Foundation, a new idea: a platform for patients that offers the possibility of a private diagnosis of rare diseases.
The Saventic Medical API is a B2B opportunity for healthcare providers to make correct diagnoses faster and more easily. It also improves not only the diagnosis but also the treatment of less common diseases, and broadens knowledge of their symptoms and course based on the collected data. The Foundation is a B2C application for patients seeking a diagnosis. It is the second way to reach patients with rare diseases: not only analyzing databases in hospitals, but also reaching patients directly. The Foundation currently supports patients with metabolic and blood diseases; solutions for other diseases are under development. Algorithms for Gaucher’s disease, Fabry’s disease and blood cancer are currently being commercialized.
To sum up, year by year organizations such as Saventic will improve the work of doctors and the diagnosis of diseases, as well as treatment methods and knowledge of symptoms. Such a modern medical solution is a novelty on both the Polish and the global market.
Engineering and physics could be called best friends. To make any mechanism work, designers have to start from physical laws such as gravity, and every mechanism first has to satisfy characteristics such as form, consistency and deformation capacity. However, solving the governing equations can be computationally expensive, depending on the complexity of the material.
MIT researchers decided to focus on this problem and presented artificial intelligence software that determines the stress and strain of a material based on image recognition.
The algorithm was developed by Zhenze Yang (lead author and PhD student in the Department of Materials Science and Engineering), Chi-Hua Yu (former MIT postdoc) and Markus J. Buehler (Director of the Laboratory for Atomistic and Molecular Mechanics and McAfee Professor of Engineering), and makes it possible to connect computer vision and materials analysis in real time.
As data, the researchers used different materials of various consistencies, “from soft to hard”. The main machine learning model was based on a GAN (generative adversarial network), trained on pairs of images so that the system could gain a general “understanding” and, in addition, visualize micro-details and singularities like cracks and other deformities.
In order to capture the pressure exerted under certain conditions, the objects were represented as random geometric figures.
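To illustrate the general idea of replacing an expensive solver with a model learned from geometry images, here is a deliberately simplified sketch. The “stress” field below is synthetic and the model is a plain least-squares map, not the team’s GAN; it only shows the pattern of training on geometry-to-field pairs and then predicting a field in one cheap evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_stress(geometry):
    # Synthetic ground truth: "stress" concentrates around material edges.
    gx = np.abs(np.diff(geometry, axis=0, prepend=0))
    gy = np.abs(np.diff(geometry, axis=1, prepend=0))
    return gx + gy

# Training data: random binary geometry images and their stress fields.
n, size = 200, 8
X = rng.integers(0, 2, size=(n, size, size)).astype(float)
Y = np.stack([true_stress(g) for g in X])

# Toy surrogate: one linear map from flattened geometry to flattened stress.
Xf, Yf = X.reshape(n, -1), Y.reshape(n, -1)
W, *_ = np.linalg.lstsq(Xf, Yf, rcond=None)

# Predict the stress field for an unseen geometry in one matrix multiply,
# instead of running a costly physics solver.
test_geom = rng.integers(0, 2, size=(size, size)).astype(float)
pred = (test_geom.reshape(1, -1) @ W).reshape(size, size)
err = np.abs(pred - true_stress(test_geom)).mean()
print(f"mean absolute error: {err:.3f}")
```

A linear map cannot capture the edge-detection rule exactly, which is precisely why the MIT work uses a deep generative model; the shared point is that, once trained, prediction costs a single forward pass.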
This innovation will open many doors in risk estimation, offers a significant promise of increased structural stability, and reveals the potential of AI and computer vision going forward.
The company installed the cameras in February 2021, explaining that they were needed for safety.
Drivers of Amazon’s delivery service in the US must now sign a “biometric consent” form in order to continue working for the company, The Verge writes on March 24.
It involves agreeing to the collection of data from cameras installed in delivery vans. Drivers must agree to the use of “certain technology, including cameras” as a “condition of delivering packages for Amazon,” according to Vice.
The drivers in question are those who rent Amazon vans under the Partner Service Delivery programme. According to Vice, about 75,000 drivers could be affected.
What kind of data the company will collect depends on what kind of equipment is installed in the vans, The Verge suggests. But the agreement implies a wide range of data to be collected. This includes cameras using facial recognition to confirm driver identity and connect to an account, according to Vice.
The data collected includes, for example, the van’s movement (speed, acceleration, braking, turns and distance travelled), “potential traffic offences” such as speeding or an unbuckled seatbelt, as well as “potentially dangerous driving behaviour”, such as when the driver is distracted from the road or falling asleep.
The company says it installs the cameras for “safety” and “to improve delivery”. But some drivers have already refused to sign the agreement.
Amazon installed artificial intelligence cameras in vans rented by drivers participating in the Partner Service Delivery programme in February. They have built-in software that can detect 16 different safety issues, including drivers being distracted, speeding, braking sharply and more.
In early March, senators from five US states wrote to Amazon saying that the use of surveillance cameras in delivery vans “raises important privacy and worker oversight issues that Amazon must respond to”.
In September 2020, human rights activists had already spoken out about the harsh working conditions at Amazon: for example, the company has an electronic employee monitoring system, there are cameras in warehouses, and drivers’ locations are constantly recorded.
The ills of psychology and psychotherapy have been the same for years – poorly structured data, a multitude of schools and therapeutic methods, the effectiveness of which is poorly or almost completely unverifiable.
Will the algorithms be able to help the man who created the imperfect, flawed system?
According to a 2016 meta-analysis published in Psychological Bulletin, covering the previous 50 years of research, psychology has made no progress in predicting acts of suicide. Subsequent cross-sectional studies laboriously try to separate the wheat from the chaff, and at the same time show how much pseudoscience there is in psychology. As an introduction, it is worth mentioning the famous psychologist with near-celebrity status, Philip Zimbardo, whose experiment did not survive the test of time either and turned out to be an ordinary fraud, as discovered by a French documentary filmmaker.
Despite the crises that psychology is going through, there are also areas where attempts are made to apply algorithms and solutions bordering on artificial intelligence and machine learning. Let’s take a look at the most interesting solutions that scientists have managed to create so far. We will omit historical algorithms, such as ELIZA, one of the first intelligent interlocutor-therapists, and focus on a subjective selection of three more advanced ones.
Say hi to Ellie, the avatar therapist.
Although this algorithm is already quite old and the graphical rendering of the therapist is poor, it still makes a big impression. The algorithm analyzes our facial expressions, gestures, voice timbre and eye movements, and conducts the conversation as in classical therapy. Studies have shown that veterans (who participated in the pilot) suffering from post-traumatic stress disorder were more likely to answer questions from a digital “human” than from a real therapist. The results were appreciated by clinicians, who were impressed by the effectiveness of the algorithm and the openness of patients during conversations with the avatar.
A much more specialized approach is proposed by Spring Health, which creates algorithms that monitor only mental illnesses.
“We are able to predict whether the condition of a given patient will improve after the selected therapy,” says Adam Chekroud, co-founder of Spring Health.
The Spring Health algorithms are used, for example, by employees of Amazon and Gap Inc. For now, however, the diagnosis consists of selecting appropriate drugs based on questionnaires filled in by the patients themselves.
This process is, of course, largely automated and can boast very high treatment effectiveness, but we are still a little short of an artificial intelligence that would be a full-fledged and effective therapist. For now, the effectiveness of the algorithms depends on analyzing a sufficiently large data set. The algorithm analyzes data on several hundred thousand patients, taking into account all their medical data, and on this basis builds a whole network of relationships between the type of patient and the therapy that proved most effective.
On the basis of such a network of dependencies, algorithms are already able to identify people with mental disorders long before their diagnosis by any human doctor.
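As a rough illustration of how such a network of dependencies can drive a recommendation, here is a minimal nearest-neighbour sketch: find the historical patients most similar to a new one and see which therapy worked for them. The features and the rule generating them are invented; this is my own simplification, not Spring Health’s actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up historical records: a feature vector per patient (e.g. questionnaire
# scores) plus the index of the therapy that proved most effective for them.
n_patients, n_features, n_therapies = 500, 6, 3
features = rng.normal(size=(n_patients, n_features))
# Synthetic rule: the best therapy depends on which feature dominates.
best_therapy = np.argmax(np.abs(features[:, :n_therapies]), axis=1)

def recommend(new_patient, k=15):
    """Recommend the therapy most common among the k most similar patients."""
    dists = np.linalg.norm(features - new_patient, axis=1)
    neighbours = np.argsort(dists)[:k]
    votes = np.bincount(best_therapy[neighbours], minlength=n_therapies)
    return int(np.argmax(votes))

new_patient = rng.normal(size=n_features)
therapy = recommend(new_patient)
print(f"recommended therapy index: {therapy}")
```

The point of the sketch is only the mechanism: with enough patient-outcome pairs, similarity in the feature space becomes a usable proxy for “what worked for people like you”.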
So we can risk a statement that in the future machines will be responsible for healing not only our body, but also our mind. The only question is how distant this future is.
Sources:
– American Psychological Association, After Decades of Research, Science Is No Better Able to Predict Suicidal Behaviors
– https://bit.ly/3cszETI
– https://bit.ly/3sy9ogi
– https://on.mktw.net/3w64GIN
Around 50 years ago it became clear that the world would never be the same again: the internet revolution had started. Today we are continuing that process, and the pace of change is only increasing. Overall trends show that the differences between technological domains are blurring to answer the demands of modern society. The result is the birth of IoT applications in various domains, on both the specialized and the mass market.
This in turn gives tremendous business and career opportunities for all types of specialists from various domains, because:
There is no smart city without smart sensors.
Smart sensors need an embedded AI.
Embedded AI requires data scientists.
This short chain shows how different technologies (in this case AI) are intertwined with the IoT. Unfortunately, this intertwining causes a high degree of uncertainty and makes business and career decisions more complex. In order to act wisely, you need a good understanding of the roots of these technologies.
Internet of Things (IoT)
As you might guess, the Internet of Things is a somewhat muddled concept, so I would like to introduce it by dividing it into its two components: the internet and the things.
Firstly, we need to look at the very basics and acknowledge that the existence of our universe is possible thanks to the four fundamental forces of nature: gravity, the weak force, electromagnetism and the strong force.
For us, the most important is electromagnetism, which we have been trying to tame and benefit from throughout the past centuries. We were fairly unsuccessful until the 19th century, when we started to process energy in more sophisticated ways.
At that time we uncovered the potential of radio frequencies and made developments like the electric light, the transformer and wireless communication.
If you are familiar with the early history of the internet you probably recognize the name ARPANET, a military program developed to connect university centers in the United States. The next step was the creation of the Internet Protocol (IP); soon after that the 1G cellular system was born, and you know the rest of the story.
As you can see, the internet was initially created to connect specialized machines with at least some computing power, such as computers and, later, mobile devices. On the other hand, back in the ’70s there were already initial applications of machine-to-machine (M2M) communication.
You might ask: how is today’s IoT different from what it was in the past?
A great answer to this question is a quote by iotforall:
„The biggest difference between M2M and IoT is that an M2M system uses point-to-point communication. An IoT system, meanwhile, typically situates its devices within a global cloud network that allows larger-scale integration and more sophisticated applications. Scalability is another key difference between M2M and IoT.”
Let’s quickly visualize the power of IoT in practice.
Taking healthcare as an example, a truly seamless experience might look like this: mattresses monitoring your sleep, chairs monitoring your posture, various medical devices testing your condition, microphones collecting data about your cough to monitor your lungs, and so on. This in turn means we need billions more connections than we have today, which is only possible with a new generation of telecommunications architecture.
The answer to this demand is the improved 5th generation (5G) of cellular communication: protocols, rules and methodologies standardized by 3GPP (the 3rd Generation Partnership Project).
It is mainly focused on:
The wide availability of the signal (enhanced mobile broadband)
Significantly decreased latency compared to 4th generation technology, especially useful for applications requiring critical reliability e.g. remote surgical operations
Enhanced machine-to-machine communication, with up to 1 million devices per square kilometer
The technology still faces many challenges, in both engineering and societal aspects. As you can see, many of the challenges raised by IoT advocates are not going to be solved by the rise of 5G architecture alone. Fifth-generation cellular might soon become outdated, which is why various R&D departments are already working on 6G.
It is projected that the next few decades will be a constant transition between generations of cellular technology. Hence, 4G, 5G and 6G will overlap, and it would be hard to talk about any one of them individually in terms of predictions. That is why I would like to describe in more detail the future business opportunities resulting from these three.
It is projected that by 2030 the overall telecommunication advancements will allow:
Transfer of senses like smell
A disclaimer: this is my private observation. With the rise of political trends and ideas such as giving every person a salary from the day of birth, there will be hundreds of consequences (not rating them good or bad), and one of them will be increasing boredom. The rise of all sorts of gaming apps using this feature is a marvelous way to create new experiences and disrupt the entertainment sector.
Inch-perfect localization services
Space! The answer to an unbreakable internet connection all around the world. Satellite-based internet is a huge opportunity to enter growing markets such as Nigeria. Many of the services that Google wanted to provide there failed due to poor internet connectivity. In the next decade this will change for the better. It is worth mentioning that almost everybody in Nigeria is a smartphone user.
Another aspect is the production and management of the data gathered through satellites. Geospatial analytics is a great source of knowledge: for example, the ability to measure the amount of gasoline in Saudi Arabian oil tanks and send this data to the NYC stock exchange to see how stock prices relate to real demand.
Not to mention GPS advancements that will make navigating, targeting, etc. more efficient.
How to benefit from it?
I hope that you now have a somewhat broader sense of how things will look in the future, and that we are only just getting started.
Huge problems, if solved, are usually followed by huge earnings. Here is a list of problems and a few interdependencies that might help you build your future business model or career.
Connectivity — discovering the ways to establish a reliable wireless connection. As we already know, the connection is just a radio wave of a certain frequency, hence it can be modulated. This modulation could be „smart” in the future thanks to Deep Learning and Machine Learning.
Continuity — optimizing the battery life. How to predict long-term battery life by analyzing data from charge-discharge cycles?
Compliance — be up to date with evolving regulations. As AI is improving financial services compliance, it has a significant potential for smart management of constantly changing regulations.
Coexistence — billions of devices per km² mean vast amounts of data that need to be processed. Smart sensors with embedded AI are a crucial element of Industry 4.0.
Cybersecurity — a secure connection is the most important part of IoT. Which technology will make it possible? Blockchain IoT has a projected 45% CAGR; and what will be the successor of NFC (near-field communication)? Many questions, few answers = a good business opportunity.
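To make the “Continuity” item above concrete, here is the simplest possible battery-life sketch: fit a line to capacity-fade measurements and extrapolate to the common 80%-of-nominal end-of-life threshold. The fade data below is synthetic, and a linear fade model is a deliberate simplification (real degradation is nonlinear); it only shows the shape of the prediction problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic measurements: capacity (as a fraction of nominal) over the
# first 300 charge-discharge cycles, fading ~0.05% per cycle plus noise.
cycles = np.arange(300)
capacity = 1.0 - 0.0005 * cycles + rng.normal(0, 0.002, size=cycles.size)

# Fit capacity = a * cycle + b by least squares.
a, b = np.polyfit(cycles, capacity, deg=1)

# End of life is commonly taken as 80% of nominal capacity.
eol_cycle = (0.8 - b) / a
print(f"fade per cycle: {a:.6f}, predicted end of life: cycle {eol_cycle:.0f}")
```

Production systems replace the straight line with learned degradation models, but the workflow, measure cycles, fit, extrapolate to a threshold, is the same.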
The transformation will not happen overnight. 5G is very different from 2G, 3G and 4G because it is what makes the next industrial revolution possible. Add the coronavirus and the tendency to move industrial production from China back to Europe, and there are plenty of opportunities for businesses focused on adapting Europe’s industry to new circumstances. All solutions connecting old infrastructure with new technologies are going to be crucial.
In the IoT world, embedded AI will be used very often, and a certain degree of independence requires the applied algorithms to be reliable. To achieve that, the algorithms will have to be certified. If you like law and science, such a research center may be a good career choice.
Since space will be a fundamental part of IoT systems, if you have the opportunity to work at a NewSpace company it is highly likely that you will benefit financially and enjoy long-term, stable employment.
Motion is everything. We are entering a time when the motion of people, cars, ships and more will be tracked, and the data produced this way will be the new oil of our century. To benefit from it, you need to solve the issues known as the 5 C’s of IoT and follow the latest media coverage of Industry 4.0 and the space race.
Artificial intelligence is one of the fastest-growing fields today. It is currently being used in many disciplines across the globe. However, this technology needs to be monitored to prevent bias or other negative impacts from affecting the world.
Responsible AI is developed to help prevent any harmful implications of AI, by having policies related to bias, ethics and trust. It is relatively new; however, many companies are favoring its incorporation into their infrastructure. Responsible AI caters to managing and regulating intelligent systems, to make sure they do not harm the society.
There are three major factors to consider when determining whether a certain piece of AI technology is suited to society:
Awareness of the accountability of AI research and development is necessary. That is, who is to blame if an intelligent machine makes an error? The research should be capable of determining the possible effects of releasing a system into the world.
AI algorithms learn from the data they receive. However, they should be capable of reasoning and justifying their actions.
Transparency is required to make sure people know what a particular intelligent system does, and what it is capable of. It requires governance to ensure it delivers societal good.
Responsible AI research is being done across different platforms to devise rules and regulations to govern AI. RRI (Responsible Research and Innovation) is an interactive and transparent process that holds individual innovators or groups of innovators responsible for the acceptability, desirability and sustainability of a given technology in society. It can be implemented between different parties using the following approaches:
Permanent individuals/groups from different backgrounds can discuss the innovation and its possible outcomes. This includes ethical review boards within organizations.
A set of rules and guidelines that the outcomes of research and innovation should follow, so that they are ethical, legal and safe.
A code of conduct detailing the behavioral choices for stakeholders in different sectors.
Industry standards that set a minimum threshold for the safety required for testing and development of new technology.
Approaches and methods for keeping track of the future impacts of a particular technology, such as scenario planning and modeling.
Since AI is prone to biases and misjudgments, RRI can ensure the technology is being utilized for the ultimate good of the world. In the end, human input is required to make sure technology does not go against humanity and that processes follow ethics, trust and bias standards. This improves accountability and promotes a better public image of such systems.
AI in the transportation field is expected to grow to a market size of $10.3 billion in the next 10 years. Many companies in the trucking industry are moving in the direction of full automation. Due to law changes in some US states, self-driving trucks will be able to drive in groups of two or more. Waymo, the American leader in autonomous driving technology, has teamed up with the German truck maker Daimler to deliver Level 4 (High Driving Automation) vehicles. That means trucks will be able to react and intervene if there is a dangerous situation on the road. Although the system does not require human interaction in most cases, it still has a manual override option. For now, Level 4 vehicles can only operate within urban areas where the average speed is 30 mph. Waymo’s projects are now mostly focused on highway transportation and small freight carriers.
Before this collaboration, Waymo had been testing its technology in driverless cars in Arizona, where it started a project with a fully automated taxi service. Why do so many people see this as the future? AI is commonly used in predicting and detecting traffic conditions. On highways, trucks with AI technology will not only save money and lower emissions but also increase efficiency. Without any human interaction needed, we can manage our deliveries in a simpler way. AI also reduces operational costs: fewer people have to be hired, and the machines work every day without a break. With the possible paths of pedestrians and cyclists programmed in, the probability of an accident will decrease.
Unfortunately, current technology cannot replace humans entirely. Besides driving, truck drivers also take care of loading, vehicle inspection and customer service. An issue I should have mentioned first is the very high cost of development and, more importantly, maintenance: if a company wants the system to constantly observe traffic and improve itself based on patterns, the software and hardware need to be updated very often. Another drawback is that machines do not have emotions, so whether we want to or not, we have to program them to choose in a fatal situation; for example, we have to decide whether the driver’s or the pedestrian’s life is the priority. AI is the next step in the development of logistics and transport. There will always be pros and cons to every progressive idea, but we cannot be intimidated by it. With out-of-the-box thinking, this technology could become very useful to us and help us improve our skills in managing transport.
It has been exactly a month since we learned the winner of the latest edition of TechCrunch Disrupt Berlin 2019. Congratulations to the newest Startup Battlefield winner, Scaled Robotics, who designed a robot that can produce 3D progress maps of construction sites in minutes.
Scaled Robotics wins the Startup Battlefield Source: https://techcrunch.com/2019/12/12/scaled-robotics-wins-startup-battlefield-at-disrupt-berlin-2019/
How does Scaled Robotics work?
The startup has created a robot that trundles autonomously around construction sites, using a 360-degree camera and custom lidar system to systematically document its surroundings. All this information goes into a software backend where the supervisors can check things like which pieces are in place on which floor, whether they have been placed within the required tolerances, or if there are safety issues like too much detritus on the ground in work areas. The data is assembled automatically but the robot can be either autonomous or manually controlled.
Why do construction companies need Scaled Robotics?
Construction is one of the world’s largest but also most inefficient and wasteful industries. By some estimates, nearly 20% of every construction project is rework. The problem of waste and rework is so widespread that the industry on average operates on a 1-2% margin. The root of the problem is that the construction industry still relies on tools and processes developed over 100 years ago to tackle the problems of today. The robot can make its rounds faster than a couple of humans with measuring tapes and clipboards, and someone carrying a stationary laser ranging device from room to room simply works too slowly. Working from outdated data is one of the main problems for developers. This is confirmed by a case at one of the client companies: one of the first times the startup collected data on a site, the client was completely convinced that everything they had done was perfect. Scaled Robotics put the data in front of them, and it turned out a structural wall was simply missing, and had been missing for 4 weeks. Thanks to Scaled Robotics’ technology, such situations can be avoided.
Simultaneous localization and mapping (SLAM) tech Source: https://techcrunch.com/2019/12/11/scaled-robotics-keeps-an-autonomous-eye-on-busy-construction-sites/
Technologies that support people’s work
There is no doubt that the entire competitive advantage of Scaled Robotics lies in its innovative technology. An advantage of simultaneous localization and mapping (SLAM) tech is that it measures from multiple points of view over time, building a highly accurate and rich model of the environment. The Automated Construction Verification system, combined with scans from traditional laser scanners, can verify the quality of the build, providing high-precision information to localize mistakes and prevent costly errors. What is more, Automated Progress Monitoring helps track the progress of the construction project and provides actionable information for site managers. By comparing the scan to a source CAD model of the building, it can paint a very precise picture of the progress being made. Scaled Robotics also built a special computer vision model suited to the task of sorting obstructions from the construction and identifying everything in between.
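The scan-versus-CAD comparison can be illustrated with a toy occupancy-grid diff: mark the cells the design says should contain material but the scan shows empty, and vice versa. The grids below are invented, and the real system works on rich 3D SLAM maps rather than small 2D arrays.

```python
import numpy as np

# Toy 2D occupancy grids: 1 = material present, 0 = empty.
cad_model = np.zeros((6, 8), dtype=int)
cad_model[0, :] = 1      # north wall in the design
cad_model[:, 0] = 1      # west wall in the design

scan = cad_model.copy()
scan[0, 3:6] = 0         # the scan shows a section of the north wall missing

missing = (cad_model == 1) & (scan == 0)     # designed but not built
unexpected = (cad_model == 0) & (scan == 1)  # built but not designed

print(f"missing cells: {missing.sum()}, unexpected cells: {unexpected.sum()}")
```

Flagging the non-zero cells of `missing` is exactly the kind of signal that would have caught the absent structural wall weeks earlier.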
What Scaled Robotics did is that they rethought the entire construction process. Their mission is to modernize construction with Robotics and Artificial Intelligence, thereby creating a manufacturing process that is lean, efficient and cost-effective.
Does Scaled Robotics have a chance to revolutionize the construction industry on a global scale?