Author Archives: Gil Jakub

Computer vision research at Facebook

Reading Time: 2 minutes

Machine learning can do many things, as long as we have data to teach our models. Because of this, researchers are trying to make algorithms smarter and smarter, so that they no longer need so much information to learn. Facebook, with its huge collection of images, is the perfect place to do research on this topic and improve the algorithms. The main goal of the research is to teach the algorithm so that computer vision becomes self-supervised, so to speak.

source: Facebook

Semi-supervised learning is not all that easy, because at the current stage of development algorithms can fill in data gaps only by extracting information from their training sets. While this is fairly easy for text analysis, it is much more difficult for images and video. Text is quite repetitive: sentence structures and words recur. In contrast, the objects seen in images are much more varied; different colors, objects and surroundings are much harder to recognize than text. While this is not an easy task, the researchers have shown that it is possible and produces very visible results.

Facebook AI

The DINO system is able to learn to find interesting objects in videos of people, animals and things quite well, without any labelled data. It achieves this by not treating the video as a sequence of images to be analyzed one by one. By paying attention to the middle and end of the video, as well as the beginning, the agent can get a sense of things like “an object with this general shape goes from left to right.” That information is combined with other knowledge: when, for example, the object on the right overlaps the first object, the system knows they are not the same thing, just two things touching in those frames. This knowledge in turn can be applied to other situations. In other words, the system develops a basic sense of visual meaning, and does so with very little training on new objects.
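To make the flavor of this concrete: matching an object across frames by comparing learned feature embeddings can be sketched in a few lines of Python. This is only an illustration with made-up random embeddings, not Facebook’s actual DINO code; the `match_object` helper and the 64-dimensional vectors are invented for the example.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_object(query_emb, candidate_embs):
    # Pick the candidate region in the next frame whose embedding
    # is most similar to the tracked object's embedding.
    sims = [cosine_sim(query_emb, c) for c in candidate_embs]
    return int(np.argmax(sims)), max(sims)

# Toy embeddings: the tracked object, and three candidate regions
# in the next frame (the second is the same object, slightly perturbed).
rng = np.random.default_rng(0)
obj = rng.normal(size=64)
candidates = [
    rng.normal(size=64),
    obj + 0.1 * rng.normal(size=64),
    rng.normal(size=64),
]

best, score = match_object(obj, candidates)
print(best)  # 1 — the perturbed copy of the object wins
```

In a real self-supervised system the embeddings would come from a trained network rather than random draws, but the matching logic is the same kind of nearest-neighbor comparison.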

Google Earth Timelapse

Reading Time: 2 minutes

There’s no denying that Google Earth, as well as Google Maps, provides enormous opportunities for Internet users. At any moment, a user can take a virtual walk around the Taj Mahal or switch the view to the middle of a South American jungle.

Google surprised us this time with a very interesting feature added this week. It is called Google Earth Timelapse, and it gives us the ability to trace the history of an area over the last 37 years using satellite imagery. This way, anyone can interactively observe the changes to the Earth that have taken place over nearly four decades.

Google video promoting the new feature:

Considering that the changes that have taken place on Earth in the last 50 years are unmatched by any period in human history, this feature shows how drastic the impact of human activities has become.

Animation shows glacier change in Alaska over the last 37 years

Google Earth Timelapse allows the viewer to look at how buildings, cities and rivers have changed, using advanced 3D graphics rendering techniques. By rotating the model and changing the viewpoint, the user can watch a time-lapse movie of superimposed images. Some of the most interesting phenomena to observe are the drying up of the Aral Sea, the expansion of the Dubai coastline, the deforestation of the Amazon and, most importantly, the melting of glaciers. All of this irrefutable evidence makes a very good case against skeptics of global warming and of the harmful human exploitation of the Earth. The feature also provides a great deal of data for urban planners and environmental engineers.


Reading Time: < 1 minute

Mobileye, an Intel subsidiary, is ramping up its autonomous vehicle plans and expanding into delivery. 

The company announced on Monday that it had reached an agreement with Udelv to supply its self-driving technology for thousands of purpose-built autonomous delivery vehicles. The companies plan to deploy more than 35,000 of the autonomous vehicles, known as Transporters, on city streets by 2028.

Commercial operations will begin in 2023. Donlen, a commercial fleet leasing and management firm based in the United States, has placed the first preorder for 1,000 of these Udelv Transporters. Udelv will collaborate with Mobileye to integrate the self-driving technology into its existing distribution management system.

Mobileye’s complete self-driving system includes 13 sensors, three long-range LiDARs, six short-range LiDARs, and six radars. It also includes the Israeli company’s EyeQ system-on-a-chip and the Road Experience Management (REM) data crowdsourcing platform, which uses real-time data from Mobileye-equipped vehicles to build a global 3D map.

Lifelike virtual characters in Fortnite

Reading Time: 2 minutes

Digital humanoid characters in the games industry are becoming more and more realistic, but the emotions on their faces still come across as artificial and leave something to be desired. Gamers expect game characters to look as lifelike as film footage, which is a challenge in itself. However, Epic Games has started to face up to this challenge.

A few days ago, Epic Games announced that it will implement a new system for projecting emotion and movement, made possible by cooperation with a company called 3Lateral.

Tim Sweeney, Epic Games CEO, announced:

“Real-time 3D experiences are reshaping the entire entertainment industry, and digital human technology is at the forefront. Fortnite shows that 200,000,000 people can experience a 3D world together. Reaching the next level requires capturing, personalizing, and conveying individual human faces and emotions”

The 3Lateral studio specialises in 3D object scanning and in transferring realistic motion into computer programs. Besides that, the company works on face rigging and character modelling using simulation processes. This is exactly what Epic Games needs: the most important thing for the Fortnite creator is to convey real feeling in the game’s humanoid characters. That’s why Epic Games gathered over 60 different people to reflect the behaviour of every race, gender and nationality. Participants will be observed by sensors and cameras from every angle. The system will scan people’s actions, the emotions on their faces, and even motions such as throwing grenades. The 3Lateral studio also makes it possible to scan bodies in the clothes used in the game, so that even a small fold in a costume is visible while playing.

The cooperation with Epic Games gives 3Lateral an opportunity to grow. 3Lateral founder Vladimir Mastilovic expressed his view on this chance:

“Observing, analysing and reconstructing these mechanisms has always fascinated us at 3Lateral, and we are excited to have joined a like-minded team at Epic Games with such strong desire to solve this near impossible problem.”

The video game market has a lot of potential all around the world, and the number of gamers is still growing. That’s why graphic designers are doing everything they can to improve quality and make characters more realistic.




The stress-relieving cedar egg

Reading Time: 2 minutes

Most people have breathing problems in stressful situations and usually gasp for a cigarette as a way to calm down. However, those who care more about their health now have a breathing aid called Kitoki.

Kitoki looks like a large irregular egg with a white mouthpiece on one end. It is rather odd-looking but has a precisely fitted cedar chassis. The device is rounded and matched to the shape of the hand. Kitoki also has an LED light with a significant function:


“A tiny LED lights up when the device senses that you have taken a deep enough breath. The idea here is to prevent hyperventilation and promote calmness, and sometimes a cue can be helpful for that.”

How does Kitoki work?

This small device was designed specifically for those of us who have breathing problems in the nerve-wracking moments of life. Kitoki helps you settle your breathing. By electrically measuring the sweat response, it recognizes the intensity of your emotional state. This method is called galvanic skin response (GSR); Kitoki measures it with a small metal sensor located under the device’s mouthpiece.

When a stressful situation comes, you put Kitoki in your mouth, place your fingers on the metal sensors and start to breathe. During this process, the device monitors your emotional state and gives a buzzing signal when you are completely calm.
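A calm-detection loop like this can be sketched in a few lines of Python. This is purely illustrative: the threshold, window size and readings below are invented for the example, not Kitoki’s actual firmware logic.

```python
def is_calm(gsr_readings, threshold=2.0, window=5):
    # Skin conductance (GSR) falls as arousal drops; treat the user as
    # calm once the last `window` readings all sit below a threshold
    # (hypothetical units, e.g. microsiemens).
    if len(gsr_readings) < window:
        return False
    return all(r < threshold for r in gsr_readings[-window:])

# A stream of readings that gradually settles down as breathing slows.
stream = [4.1, 3.6, 3.0, 2.4, 1.9, 1.8, 1.7, 1.6, 1.5]
print(is_calm(stream))  # True: the last five readings are below 2.0
```

Requiring a whole window of low readings, rather than a single one, is the usual way to avoid triggering the “calm” signal on a momentary dip.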

This device will probably not have a huge influence on our lives, but it is an interesting gadget for those with money to spare.


Quote: Devin Coldewey




New Street View Trekker

Reading Time: 2 minutes

A few years ago, Google Maps started the Google Street View program. In the beginning, photos were uploaded only by users who wanted to share images from their traveling escapades. As the program developed, Google surprised us with 360-degree cameras mounted on cars.

Unfortunately, cars cannot take pictures in hard-to-reach places. That’s why Google created the Street View Trekker backpack, which makes it possible to snap pictures in closed areas, for example in caves. The older version of the Trekker was clumsy and weighed around 18 kilograms. Its design also left a lot to be desired: it looked more like a portable speaker than highly professional equipment.


Left: the older version


Therefore, Google came up with the idea of redesigning the Trekker. Google slimmed down the older backpack, so the newer one is a little lighter, and improved hardware allows it to capture better, sharper imagery.



Google also gives assurances about the backpack’s versatility:

“Like previous Trekker generations, the new version can be put on cars, boats or even ziplines. This helps when capturing hard-to-access places, or when building maps for developing countries and cities with complex structures”


So who uses the Street View Trekker?

The backpack makes it possible to take photos in harsh weather conditions during expeditions, for example in the Arctic or while mountain climbing. The device is used not only by travelers and film crews who need professional equipment, but also by tourism boards and transit operators.







Facial recognition in the car rental industry

Reading Time: 2 minutes


The car rental industry has been one of the most promising lines of business of the last decade. According to Zion Market Research conducted in 2017, the global car rental market will be worth around 124 billion dollars in 2022. Compared to its present-day value (around 70 billion dollars), that makes it a good opportunity to invest in.


The main concern for car rental companies is the tangled registration process, which discourages potential customers. Therefore, Hertz stepped forward by implementing a new identity recognition system. Instead of searching for an ID card in a wallet, a customer can simply look at the camera next to the car window; the camera compares the face with a database and logs the customer in within 30 seconds.




This was possible thanks to cooperation with ClearCompany, which provides software for scanning biometric faces and fingerprints.
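The matching step itself can be sketched as a nearest-neighbor search over face embeddings. The following Python is purely illustrative, not Hertz’s or ClearCompany’s actual software: the embeddings, the 0.6 threshold and the `identify` helper are all invented, and a real system would produce the embeddings with a trained face-recognition network.

```python
import numpy as np

def identify(probe, database, threshold=0.6):
    # Compare a probe face embedding to each enrolled embedding; return
    # the matching customer id, or None if nobody is close enough.
    best_id, best_sim = None, threshold
    for customer_id, enrolled in database.items():
        sim = float(np.dot(probe, enrolled) /
                    (np.linalg.norm(probe) * np.linalg.norm(enrolled)))
        if sim > best_sim:
            best_id, best_sim = customer_id, sim
    return best_id

# Toy enrolled database of two customers (random 128-d embeddings).
rng = np.random.default_rng(1)
alice = rng.normal(size=128)
db = {"alice": alice, "bob": rng.normal(size=128)}

# The same face seen again, in a slightly different shot.
probe = alice + 0.05 * rng.normal(size=128)
print(identify(probe, db))  # alice
```

The threshold is what separates “logged in within 30 seconds” from a rejection: too low and strangers match, too high and genuine customers are turned away.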

In the beginning, using the facial recognition checkout will require Hertz Gold Plus membership and, of course, signing up in the ClearCompany app.


Last Tuesday, Hertz announced a plan to install facial recognition points at airports around the country. The first will open this week at Hartsfield–Jackson International Airport in Atlanta. The company says it will install 40 more units at other airports across the country in 2019.



However, facial recognition systems are somewhat controversial, because part of the community is anxious about privacy and the security of the database, which could undoubtedly be used illegitimately. But does anyone care that much? Almost every smartphone has a face identification system. Comfort requires sacrifice.


Image: Hertz


DOCOMO and Toyota T-HR3 Humanoid Robot Using 5G

Reading Time: 2 minutes

On November 29, 2018, NTT DOCOMO, in cooperation with Toyota Motor Corporation, announced that they had successfully controlled the Toyota-developed T-HR3 humanoid robot over 5G mobile communications. During the test, the robot was operated from a distance of around 10 kilometers between two points, which was possible thanks to steering over the 5G link.

The T-HR3 robot was presented almost a year ago, but it had not been tested under remote control outside lab conditions because of problems with lag. Toyota’s engineers have now also solved the long latency that was a noticeable issue in the old setup. The T-HR3 looks like something taken out of the Avatar movie and has an almost identical steering system.




This humanoid robot is able to mirror the motions of the human operator plugged into it. The robot is equipped with balance sensors, which guarantee stability and freedom of movement. The deployment of hydraulic gears imitating human joints is also a surprisingly clever solution that makes it possible to mimic even gentle and precise movements. Toyota claims that the robot can be used in healthcare and in homes, but a vast majority of tech commentators see the T-HR3 as the next super-intelligent war machine. However, Toyota has shown a future direction for the use of 5G and demonstrated how to exploit the full potential of that technology.