
Scaled Robotics – an innovator in the construction industry

It has been exactly a month since the winner of the latest edition of TechCrunch Disrupt Berlin 2019 was announced. Congratulations to the newest Startup Battlefield winner, Scaled Robotics, who designed a robot that can produce 3D progress maps of construction sites in minutes.

Scaled Robotics wins the Startup Battlefield
Source: https://techcrunch.com/2019/12/12/scaled-robotics-wins-startup-battlefield-at-disrupt-berlin-2019/


How does Scaled Robotics work?

The startup has created a robot that trundles autonomously around construction sites, using a 360-degree camera and a custom lidar system to systematically document its surroundings. All this information goes into a software backend where supervisors can check things like which pieces are in place on which floor, whether they have been placed within the required tolerances, or whether there are safety issues like too much detritus on the ground in work areas. The data is assembled automatically, but the robot can be either autonomous or manually controlled.


Why do construction companies need Scaled Robotics?

Construction is one of the world’s largest but also most inefficient and wasteful industries. There are estimates that nearly 20% of every construction project is rework. The problem of waste and rework is so widespread that the industry on average operates on a 1-2% margin. The root of this problem is that the construction industry still relies on tools and processes developed over 100 years ago to tackle the problems of today. The robot can make its rounds faster than a couple of humans with measuring tapes and clipboards, and someone carrying a stationary laser ranging device from room to room simply works too slowly. Relying on outdated data is one of the main problems for developers, as one client’s case confirms. One of the first times the startup collected data on a site, the client was completely convinced everything they had done was perfect. When Scaled Robotics put the data in front of them, they discovered that a structural wall was simply missing, and that it had been missing for four weeks. Scaled Robotics’ technology helps prevent such situations.

Simultaneous localization and mapping (SLAM) tech
Source: https://techcrunch.com/2019/12/11/scaled-robotics-keeps-an-autonomous-eye-on-busy-construction-sites/


Technologies that support people’s work

There is no doubt that Scaled Robotics’ entire competitive advantage lies in its innovative technology. An advantage of simultaneous localization and mapping (SLAM) tech is that it measures from multiple points of view over time, building a highly accurate, rich model of the environment. The Automated Construction Verification system, working with scans from traditional laser scanners, can verify the quality of the build, providing high-precision information to localize mistakes and prevent costly errors. What is more, Automated Progress Monitoring tracks the progress of the construction project and gives site managers actionable information to prevent costly errors. By comparing the scan to a source CAD model of the building, it can paint a very precise picture of the progress being made. Scaled Robotics also built a special computer vision model suited to the task of sorting obstructions from the construction itself and identifying everything in between.
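The core idea of automated construction verification, comparing a scan against the CAD plan and flagging deviations, can be sketched in a few lines. Everything below (element names, coordinates, the 5 cm tolerance) is invented for illustration; the real system works on dense lidar point clouds rather than single reference points.

```python
import math

# Hypothetical planned positions from the CAD model: element -> (x, y, z) in metres.
PLANNED = {
    "wall_A": (0.0, 0.0, 0.0),
    "column_3": (5.0, 2.0, 0.0),
    "wall_B": (10.0, 0.0, 0.0),
}

# Hypothetical positions recovered from the robot's scan (wall_B was never built).
SCANNED = {
    "wall_A": (0.01, -0.02, 0.0),
    "column_3": (5.09, 2.0, 0.0),
}

TOLERANCE = 0.05  # assumed 5 cm placement tolerance

def verify(planned, scanned, tol):
    """Flag each planned element as OK, out of tolerance, or missing."""
    report = {}
    for name, pos in planned.items():
        if name not in scanned:
            report[name] = "MISSING"          # e.g. the four-weeks-missing wall
        elif math.dist(pos, scanned[name]) <= tol:
            report[name] = "OK"
        else:
            report[name] = "OUT_OF_TOLERANCE"
    return report

print(verify(PLANNED, SCANNED, TOLERANCE))
# {'wall_A': 'OK', 'column_3': 'OUT_OF_TOLERANCE', 'wall_B': 'MISSING'}
```

The real value of such a report is the comparison against the plan: the scan alone shows what exists, but only the diff against the CAD model reveals what is missing or misplaced.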

Scaled Robotics has rethought the entire construction process. Their mission is to modernize construction with robotics and artificial intelligence, thereby creating a manufacturing process that is lean, efficient, and cost-effective.

Does Scaled Robotics have a chance to revolutionize the construction industry on a global scale?



[1] https://www.scaledrobotics.com/

[2] https://techcrunch.com/2019/12/11/scaled-robotics-keeps-an-autonomous-eye-on-busy-construction-sites/

[3] https://techcrunch.com/2019/08/02/digitizing-construction-sites-with-scaled-robotics/

[4] https://techcrunch.com/2019/12/12/scaled-robotics-wins-startup-battlefield-at-disrupt-berlin-2019/

[5] https://pitchbook.com/profiles/company/279687-25

[6] https://angel.co/company/scaled-robotics

[7] https://www.theburnin.com/startups/scaled-robotics-wins-techcrunch-disrupt-battlefield-3d-construction-site-progress-maps-2019-12/


Duolingo – the best machine learning startup to work for in 2020

Duolingo is starting the year off strong. They have been named one of the top startups to work for in the growing field of machine learning. These and many other insights come from a Crunchbase Pro analysis that used Glassdoor data to rank the best machine learning startups to work for in 2020. Why is Duolingo a unique company?

Duolingo logo
Source: https://www.duolingo.com/

Duolingo AI Research

Duolingo AI Research is one of Duolingo’s fastest-growing teams. They are using real-world data to develop new hypotheses about language and learning, test them empirically, and ship products based on their research. Duolingo has revolutionized language learning for more than 300 million people around the world. They keep on bringing creative, interdisciplinary ideas on how to deliver a high-quality education to anyone, anywhere, through AI.


Duolingo AI team logo
Source: https://research.duolingo.com/


Tools and data from Duolingo

Duolingo uses AI to adapt learning content to learners’ levels. The startup regularly releases its internal tools to the public so everyone can read more about its research innovations. One of them is the CEFR Checker. This tool determines whether texts are appropriate for beginner, intermediate, or advanced learners of English or Spanish. It works by analyzing vocabulary and highlighting words by their reading proficiency level according to the Common European Framework of Reference (CEFR). Duolingo uses interactive tools like this one to help people revise content (e.g., Podcasts and Stories) for particular levels.
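A toy sketch of how a CEFR-style vocabulary checker might work: look each word up in a level-tagged lexicon and report the hardest level found. The word lists below are tiny, invented placeholders; Duolingo’s actual CEFR Checker uses a trained model over full CEFR-aligned vocabulary data.

```python
# Hypothetical mini-lexicon mapping words to CEFR levels (illustrative only).
CEFR_LEXICON = {
    "cat": "A1", "house": "A1", "run": "A1",
    "journey": "B1", "achieve": "B1",
    "ubiquitous": "C1", "paradigm": "C2",
}

LEVEL_ORDER = ["A1", "A2", "B1", "B2", "C1", "C2"]  # easiest to hardest

def annotate(text):
    """Tag each word with its CEFR level (None if not in the lexicon)."""
    words = text.lower().split()
    return [(w.strip(".,"), CEFR_LEXICON.get(w.strip(".,"))) for w in words]

def hardest_level(text):
    """Estimate the text's difficulty as the level of its hardest known word."""
    levels = [lvl for _, lvl in annotate(text) if lvl]
    return max(levels, key=LEVEL_ORDER.index) if levels else None

print(hardest_level("The cat can run"))        # → A1
print(hardest_level("A ubiquitous paradigm"))  # → C2
```

The real tool highlights individual words by level, which is exactly what the `annotate` step produces; a revision workflow then swaps out words above the target level.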

The Duolingo CEFR Checker: an AI tool for adapting learning content
Source: https://making.duolingo.com/the-duolingo-cefr-checker-an-ai-tool-for-adapting-learning-content



Duolingo is also committed to sharing data and findings with the broader research community. One example is the SLAM Shared Task, which contains data for the 2018 Shared Task on Second Language Acquisition Modeling (SLAM). The corpus comprises 7 million words produced by learners of English, Spanish, and French, and includes user demographics, morpho-syntactic metadata, response times, and longitudinal errors for more than 6,000 users over 30 days.


Why should people consider working at Duolingo?

The language-learning app Duolingo is valued at $1.5 billion after a $30 million investment by Alphabet’s CapitalG. Bookings have grown from $1 million to $100 million in less than three years for the most downloaded and top-grossing education app worldwide. What is more, Pittsburgh’s first venture-capital-funded $1 billion start-up plans to increase staff by 50% with the new funding. Duolingo has been adding users and revenue at an impressive pace, continuing to solidify its position as the No. 1 way to learn a language globally.


Why should people consider working in the machine learning field?

Demand remains high for technical professionals with machine learning expertise. According to Indeed, Machine Learning Engineer job openings grew 344% between 2015 and 2018, with an average base salary of $146,085, according to its Best Jobs in the U.S. study.

It can be safely stated that Duolingo is developing very dynamically. There is also no doubt that the rapid growth of a startup also means the development of its employees.

Would you choose to join Pittsburgh’s unicorn if you had such a chance? What do you think about Duolingo’s contribution to the development of the education sector?



[1] https://www.forbes.com/sites/louiscolumbus/2020/12/29/the-best-machine-learning-startups-to-work-for-in-2020-based-on-glassdoor/#71505e744886

[2] http://blog.indeed.com/2019/03/14/best-jobs-2019/

[3] https://www.cnbc.com/2019/12/03/google-funded-duolingo-first-1-billion-start-up-from-pittsburgh.html

[4] https://making.duolingo.com/the-duolingo-cefr-checker-an-ai-tool-for-adapting-learning-content

[5] https://making.duolingo.com/how-machine-learning-helps-duolingo-prioritize-course-improvements

[6] https://cefr.duolingo.com/

[7] https://research.duolingo.com/



If you’re familiar with the sci-fi anthology series Black Mirror, you might think of the fourth-season episode titled “Metalhead”. Apparently, it’s not fiction anymore; it’s today’s reality.

The robot dog named Spot is an invention that Boston Dynamics first started developing out of MIT. According to the state chapter of the nonprofit American Civil Liberties Union (ACLU), these robots are now working with the Massachusetts State Police’s bomb squad.
The ACLU accessed a memorandum of agreement between the state and Boston Dynamics through a public records request.
The organization’s request letter reads: “The ACLU is interested in this subject and seeks to learn more about how your agency uses or has contemplated using robotics.”
The ACLU collected all the valuable information about the new partnership, including the fact that Boston Dynamics leased the Spot robot dog to the police force for 90 days between August and November. Because few details have been revealed to the public, we don’t know exactly how these machines are being used. The only information state police spokesman David Procopio provided about Spot is that it was leased “for the purpose of evaluating the robot’s capabilities in law enforcement applications, particularly remote inspection of potentially dangerous environments.”
Michael Perry, Boston Dynamics’ vice president of business development, stated that the company is aiming to make Spot useful in different areas such as oil and gas, construction, and entertainment.
Perry said he anticipates that the police use Spot by sending it into areas that are too dangerous for a human being.

The abovementioned robot dogs are built for general-purpose use. They have an open application programming interface, which means a warehouse operator, or in this case a police department, can customize them with their own software. From what has been reported, the state police claim they haven’t used that feature yet.
Even though Perry claims the robot won’t be used in a way that would harm or intimidate people, the ACLU, as well as the internet community, is worried about the situation. Currently, the major issue is the lack of transparency in the overall robotics program.

There are various conspiracy theories circulating among netizens, mostly predicting worst-case scenarios.
The question is whether this invention is safe for the human race. But let’s face the truth: anything can be dangerous if used in the wrong way. If the people working on these machines program an algorithm that allows them to shoot at people, they will follow the order.
Personally, I’m amazed and don’t really know which adjective to use other than “amazing” in this case. I applaud Boston Dynamics for creating the algorithms behind their breathtaking machines.


Paralysis and AI

Every year, thousands of people around the world suffer neurological conditions such as stroke and spinal cord injury, and many of them are left paralyzed. Such people are almost completely cut off from social life and from communication with doctors and relatives, and expensive equipment is often unavoidable. Technologies already exist for reading thoughts and turning them into text messages at up to eight words per minute, but recently scientists from the US state of Illinois managed to improve on this figure, with considerable help from artificial intelligence.

ScienceMag introduced this technology. Their article describes an experiment in which the new technology restores the ability to communicate to a patient with so-called tetraplegia. During the experiment, a patient with implanted electrodes imagined how he would move his hand if he were writing letters. This process produced brain activity that the AI learned to recognize. The computer could then associate each pattern of activity with a particular letter of the alphabet and display, one after another, the symbols the patient mentally traced on the screen.

According to the scientists, the AI can recognize symbols with 95% accuracy, making occasional mistakes only with similar-looking letters such as “g” and “q”. Even so, a paralyzed person can now text at 66 characters per minute; for comparison, a healthy person texts at around 120 characters per minute.

By the way, thoughts can even be transformed into speech.

According to the editors of ScienceMag, researchers from Germany and the USA used computational models based on neural networks to reconstruct words and sentences by reading brain signals, as mentioned before. The overall approach is the same; they simply observed areas of the brain at the moments when people read aloud, speak, or simply listen.

During this research, they relied on data obtained from five people with epilepsy. The network analyzed the behavior of the auditory cortex, which is active both during speech and during listening, and the computer then reconstructed speech from the pulses recorded from these people. The algorithm achieved an accuracy of 75%.

Another team relied on data from six people who had undergone brain tumor removal. A microphone picked up their voices as they read different words out loud, while electrodes recorded information from the speech center of the brain. The computer then compared the electrode data with the audio recordings. Only 40% of the result was correct.

The third team from the University of California reconstructed entire sentences based on brain activity from three patients with epilepsy who read specific sentences out loud. Some sentences were correctly identified in more than 80% of cases.

Despite such appealing results, the system still has many shortcomings and needs further adjustment. However, it will keep being developed, so millions of people may once again have the opportunity to text and to speak.


  1. https://hi-news.ru/research-development/najden-sposob-prevrashhat-mysli-v-ustnuyu-rech-govorit-dlya-etogo-ne-obyazatelno.html
  2. https://hi-news.ru/technology/iskusstvennyj-intellekt-pomogaet-paralizovannym-lyudyam-pisat-ot-ruki-pri-pomoshhi-mysli.html
  3. https://www.sciencemag.org/news/2019/10/ai-allows-paralyzed-person-handwrite-his-mind 

Artificial Intelligence Creates Artificial People

What we see above is a bunch of different, average people, right?
The catch is that none of them is real…

Everyone knows that AI has been implemented in diverse applications: not only autonomous cars, management systems for factories, or the chatbots we see every day. Recently it was applied to a very specific task, namely creating nonexistent people and their faces. It might sound like a useless whim of bored programmers, but it can actually be very useful for marketing agencies or graphic designers, as the images are royalty-free and available for everyone to use.

But how does that work?

Artificial intelligence has made it easier than ever to produce images that look completely real but are totally fake, using Generative Adversarial Networks (GANs), a relatively new concept in machine learning introduced for the first time in 2014.

The essential components of every GAN are two neural networks:
- a Generator, which synthesizes new samples from scratch, starting from a random vector (noise), so its initial output is also noise;
- a Discriminator, which takes samples from both the training data and the generator’s output and predicts whether they are genuine or counterfeit.

Over time, as it receives feedback from the discriminator, the generator learns how to create more realistic images. The discriminator also learns and improves by comparing synthesized photos with real images. In other words, one network generates a fake face, while the other decides whether it is realistic enough by comparing it with photos of actual people. If the test isn’t passed, the face generator tries again. To see for yourself how well it works, go to https://thispersondoesnotexist.com.
Every time you refresh the page, you get a newly generated face.
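The adversarial loop described above can be sketched with two deliberately tiny “networks”: a two-parameter generator and a logistic-regression discriminator, trained against a 1-D target distribution instead of face images. All numbers here are illustrative; real face generators such as StyleGAN use deep convolutional networks trained on millions of photos.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x = a*z + b, so it starts by emitting pure noise around 0.
gen = {"a": 1.0, "b": 0.0}
# Discriminator: logistic regression, p(real) = sigmoid(w*x + c).
disc = {"w": 0.0, "c": 0.0}

lr = 0.01
real_mean, real_std = 4.0, 0.5  # the "training data" distribution (assumed)

for step in range(2000):
    # --- Discriminator step: label real samples 1, generated samples 0 ---
    real = rng.normal(real_mean, real_std, 64)
    fake = gen["a"] * rng.standard_normal(64) + gen["b"]
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(disc["w"] * x + disc["c"])
        g = p - label                      # gradient of BCE loss w.r.t. the logit
        disc["w"] -= lr * np.mean(g * x)
        disc["c"] -= lr * np.mean(g)

    # --- Generator step: try to make the discriminator output 1 on fakes ---
    z = rng.standard_normal(64)
    fake = gen["a"] * z + gen["b"]
    p = sigmoid(disc["w"] * fake + disc["c"])
    g = (p - 1.0) * disc["w"]              # chain rule back through the discriminator
    gen["a"] -= lr * np.mean(g * z)
    gen["b"] -= lr * np.mean(g)

# The generator's output mean should have drifted toward real_mean.
print(f"generator offset b = {gen['b']:.2f}")
```

Note the “if the test isn’t passed, try again” dynamic: every generator update is driven purely by the discriminator’s verdict, never by looking at the real data directly.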

And if you get bored with fake faces, you can always admire some AI-generated cats at https://thiscatdoesnotexist.com, though in my opinion it sometimes gets quite creepy, as that system is not as advanced as the one mentioned above…


- Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio; “Generative Adversarial Nets”
- Tero Karras, Samuli Laine, Timo Aila (NVIDIA); “A Style-Based Generator Architecture for Generative Adversarial Networks”



On the 23rd of September this year, Facebook’s Vice President of augmented and virtual reality announced the company’s agreement to acquire CTRL-labs.

CTRL-labs is a tech start-up that is still in the process of developing wristbands that would allow us human beings to connect with and control digital systems just by using our intuition.
It’s quite fascinating how our brain signals could control computers without any physical interaction.
I won’t bore you with the intricacies of the device’s detailed structure and function, but those who are interested can click here and see the start-up CEO’s presentation.
He explains how these bracelets would actually work (go to 5:40).

As you would expect, some people are not so happy about the idea of Facebook having access to data about people’s thoughts, after the platform’s scandal involving unethical behavior: sharing Facebook users’ data with third parties without permission.

Please leave your opinion about the platform’s new acquisition.
How can Facebook’s access to our nervous system affect reality?


  1. https://techcrunch.com/2019/09/23/facebook-buys-startup-building-neural-monitoring-armband/
  2. https://siliconcanals.com/news/startups/facebook-acquires-mind-reading-startup-ctrl-labs/
  3. https://www.youtube.com/watch?v=D8pB8sNBGlE
  4. https://www.facebook.com/boz/posts/10109385805377581

Lightning fast MIT Robot

MIT is known worldwide for its robotics research. In the past, they managed to create a robot that broke the world record by solving a Rubik’s Cube in only 0.38 seconds, the first four-legged robot to do a backflip, etc.
That sounds impressive, but their newest development doesn’t just look cool: it can be used in many ways and bring robots to the next level of productivity.
Picking up objects and flipping them around is easy for people. We do it every day, e.g., when taking notes at university, at work, or at home: we pick up the pen, bring it into the right position, and start writing. The same goes for eating a sandwich: we move it a little to bite from the other corner.
For our robot friends, however, it is tough to pick up things without either dropping or destroying them. Then add the factor of turning the object they have just mastered holding, and it becomes a genuinely difficult task.
That is why robots used to need a long time to plan, calculating factors like geometry, friction, and all the possible ways the object can be turned. This whole process previously took tens of minutes, which still sounds impressive, bearing in mind that if we measured and calculated these numbers ourselves, we would sit there for hours and probably still fail.
MIT managed to bring the robot’s planning time down to less than a second.
How is that possible? The robot pushes the object against a stationary surface and slides its claw down the object until it is in the right position.

In the future, this could mean that instead of a specialized tool like a screwdriver, machines would have something more like a hand, giving them the ability to pick up different kinds of tools and do various tasks.
This improvement would most likely save companies space and also money, since they would need only one robot for multiple steps.

This is another case where thinking outside the box, by simply using the surroundings, has a huge effect.







The use of facial recognition technology on birds

Today, I want to show you a great example of how object recognition technologies based on machine learning:

1) are becoming widely available and do not require rare, genius-level programming skills to get results;

2) can be trained well even on very modestly sized data sets.

The article, which I read some time ago, tells how a bird lover and part-time computer science professor, together with his students, taught a neural network to recognize bird species and then (and this impressed me a lot) to distinguish the individual woodpeckers that flew to the bird feeder in his yard.

At the same time, 2,450 photos in the training sample were enough to recognize eight different woodpeckers. The professor estimated the cost of a homemade station for the recognition and identification of birds at about $500. This really can be called technology for everyone: machine intelligence in every yard.
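To give a flavor of why a few thousand labelled photos can be enough: in practice each photo would be run through a pretrained CNN, and the resulting feature vectors classified with a simple rule. In this sketch the “embeddings” are random stand-ins, a nearest-centroid rule plays the classifier, and the species names and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
SPECIES = ["downy", "hairy", "red-bellied"]

# Stand-in "CNN embeddings": each species clusters around its own centre.
centres = {s: rng.normal(i * 5.0, 1.0, size=8) for i, s in enumerate(SPECIES)}

# A modest labelled training set: 20 noisy feature vectors per species.
train = [(s, centres[s] + rng.normal(0.0, 0.3, 8))
         for s in SPECIES for _ in range(20)]

# Nearest-centroid classifier: average the training features per species...
centroids = {s: np.mean([f for sp, f in train if sp == s], axis=0)
             for s in SPECIES}

def classify(feature):
    # ...and assign a new photo's features to the closest centroid.
    return min(centroids, key=lambda s: np.linalg.norm(feature - centroids[s]))

query = centres["hairy"] + rng.normal(0.0, 0.3, 8)
print(classify(query))  # → hairy
```

Because the heavy lifting (the feature extractor) is reused from a model trained on millions of generic images, only the lightweight classifier on top needs the bird photos, which is why a dataset of a couple of thousand images can suffice.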

Moreover, this technology can really help birds. As Lewis Barnett, the inventor of this technology, wrote in his article: “Ornithologists need accurate data on how bird populations change over time. Since many species are very specific in their habitat needs when it comes to breeding, wintering and migration, fine-grained data could be useful for thinking about the effects of a changing landscape. Data on individual species like downy woodpeckers could then be matched with other information, such as land use maps, weather patterns, human population growth and so forth, to better understand the abundance of a local species over time.”

As some people have correctly noted, this technology also has great commercial potential. Just imagine camera traps that can recognize birds that harm your fruit trees and then activate a device that makes a loud noise to scare away the pests.




Face-Scanning A.I. can identify rare genetic disorders

Facial recognition can unlock your phone, help to fight crime and, as time passes, be used in many more cases. Now it can even diagnose certain genetic diseases based on people’s faces, thanks to DeepGestalt.

Rare disorders often show up in someone’s appearance, and because of that it is sometimes possible to identify a patient’s condition based on their facial traits. Researchers have trained artificial intelligence to recognize these features, making a quick and cheap diagnosis possible. They trained the neural network, called DeepGestalt, with pictures of 17,000 kids with 200-plus genetic disorders. In a test with 502 new images, DeepGestalt successfully placed the correct syndrome in its top-10 list 91% of the time, outperforming the doctors in this field.
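The 91% figure is a top-10 accuracy: a prediction counts as correct if the true syndrome appears anywhere among the model’s ten highest-ranked candidates. A minimal sketch of the metric (the syndrome names and rankings below are placeholders, not DeepGestalt output):

```python
def top_k_accuracy(rankings, truths, k=10):
    """Fraction of cases where the true label is among the top-k candidates."""
    hits = sum(1 for ranked, true in zip(rankings, truths)
               if true in ranked[:k])
    return hits / len(truths)

# Each inner list is a model's ranked candidate syndromes for one image.
rankings = [
    ["noonan", "angelman", "williams"],
    ["cornelia_de_lange", "noonan"],
    ["williams", "angelman"],
]
truths = ["angelman", "noonan", "kabuki"]

print(top_k_accuracy(rankings, truths, k=10))  # 2 of 3 hit → 0.666…
```

A top-10 list is the clinically useful output here: it narrows hundreds of candidate syndromes down to a shortlist that a doctor can confirm with targeted genetic tests.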

“DeepGestalt is a facial image analysis framework that is able to highlight similarities to hundreds of genetic disorders” – Yaron Gurovich, chief technology officer at FDNA

This tool, in combination with genome testing, can be used to help doctors search for specific genetic markers and make an accurate diagnosis much more efficiently and effectively. And even though there is still a long way to go before it is perfect, there is a huge chance that it might one day be used in everyday life.

On the other hand, given how easy it is to photograph a face, the tool could be abused by employers or insurance providers to discriminate against people with a high probability of having certain disorders. Luckily, it is said that the tool will only be available to clinicians.







AI creates images of food that doesn’t even exist

A team of three researchers from Tel Aviv taught a neural network a very interesting, nontrivial skill: generating images of ready-made food from recipe text. Literally, it means that this AI can take any text containing a list of ingredients and figure out what the finished dish will look like.

However, the results are not very reliable yet, in the sense that the images of the really cooked dishes are sometimes quite different from what the network imagined based on reading the recipe.

Researcher Ori Bar El said:

“It all started when I asked my grandmother for a recipe of her legendary fish cutlets with tomato sauce. Due to her advanced age she didn’t remember the exact recipe. So, I was wondering if I can build a system that, given a food image, can output the recipe. After thinking about this task for a while I concluded that it is too hard for a system to get an exact recipe with real quantities and with ‘hidden’ ingredients such as salt, pepper, butter, flour etc.

Then, I wondered if I can do the opposite. Namely, generating food images based on the recipes. We believe that this task is very challenging to be accomplished by humans, all the more so for computers. Since most of the current AI systems try to replace human experts in tasks that are easy for humans, we thought that it would be interesting to solve a kind of task that is even beyond humans’ ability. As you can see, it can be done with a certain extent of success.”

Although these images are far from real, tests on people showed that they like the generated pictures and even find them appetizing. The authors of the article, posted on the TNW website, find this result depressing. As you know, there are a lot of food pictures on Instagram, and now there is a danger that these pictures may turn out to be fake. For many people, this is even more distressing than fake news or portraits of non-existent people. It seems that nothing sacred is left in this world, where even a photo of snacks can turn out to be the realistic-looking fantasy of a neural network.





