Tag Archives: google

Is data the new oil?

Until recently, companies managed only traditional assets such as machines, money and intellectual property. The digital era brought a new type of asset – data. It is the raw material from which forecasts, insights and, increasingly, very big money are made.

Big data is becoming the main driver of growth for companies and a new resource for the economy. Companies collect data on customer behavior and equipment operation, creating new services along the way based on the information they receive.

The only problem is that people are usually not aware of what data is collected about them, which creates many legal disputes over whether companies are allowed to “spy on their clients”. With the adoption of data protection rules in many countries around the world, tech giants such as Facebook, Google and Amazon are facing a real threat to their businesses.

The common phrase “data is the new oil” has become dangerous for companies whose business depends on third-party data. In my opinion, the comparison is not entirely wrong, because whoever controls the data controls the entire market. But for tech giants, a comparison with oil barons can result in image deterioration, lack of trust and loss of customers.

Because of that, Google’s chief financial officer, Ruth Porat, speaking at the World Economic Forum in Davos, tried to popularize a more upbeat way of describing data: “data is more like sunlight than oil,” adding, “It is like sunshine — we keep using it, and it keeps regenerating.”

It is clear that if Google is seen as an environmentally friendly solar power station rather than a vertically integrated oil company, many questions immediately disappear. I do not think everything will work out right away, but the attempt is worthy: the comparison of technology companies to oil barons is itself an exaggerated and very incomplete analogy. Perhaps an adequate middle ground will eventually be found.




Alibaba’s AI Customer Service is much better than Google Duplex

A long-standing goal of human–computer interaction has been to enable people to have a free conversation with machines, as they would with each other. In recent years, we have witnessed a revolution in the ability of computers to understand and to generate natural speech, especially with the application of deep neural networks.

One of the inventions in this area was Google Duplex. As you probably know, Duplex is a technology for conducting natural conversations to carry out “real world” tasks over the phone. It is directed at completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, as they would to another person, without having to adapt to a machine. For example, Duplex can automatically reserve a table for you at a restaurant by phoning the manager.

While Google is still testing and developing its new system with a small number of Pixel phone users, another tech giant, Alibaba, already has a working model. It is used not for restaurants but for an even narrower niche – the delivery of goods. At an annual AI research gathering, the e-commerce giant demoed a sample conversation in which the voice assistant was tasked with asking a customer where a package should be delivered.

The most amazing thing is that Alibaba’s voice assistant was able to deal with some tricky situations during the dialog, such as interruption (pauses), nonlinear conversation (the customer starts a new line of inquiry), and implicit intent (the customer doesn’t explicitly say what he actually means). It is amazing news, which once again underscores how strong China has become in the field of artificial intelligence. Currently, the agent is used only to coordinate package deliveries, but it could be expanded to handle other topics as well.
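Alibaba has not published how its agent actually works, but the flavor of “nonlinear conversation” handling can be sketched with a toy, entirely hypothetical dialog manager (every name and rule below is invented for illustration; a real system would use a trained intent classifier, not keyword checks):

```python
# Toy sketch (not Alibaba's system): a rule-based dialog manager that
# tolerates a topic switch mid-task and then resumes its own question.

class DeliveryDialog:
    def __init__(self):
        self.pending = "Where should the package be delivered?"
        self.address = None

    def handle(self, utterance):
        text = utterance.lower()
        # Nonlinear conversation: the customer opens a new line of inquiry,
        # so answer it and repeat the still-pending question.
        if "when" in text and "arrive" in text:
            return "It should arrive tomorrow. " + self.pending
        # Implicit intent: "I'm not home" implies rescheduling, not an
        # address -- the spot where a real system needs intent detection.
        if "not home" in text:
            return "Would the afternoon work instead?"
        # Otherwise treat the reply as the answer to the pending question.
        self.address = utterance
        return "Got it, delivering to %s." % self.address

d = DeliveryDialog()
print(d.handle("When will it arrive?"))   # answers, then re-asks the address
print(d.handle("22 Baker Street"))        # fills the address slot
```

Even this caricature shows why the demo impressed people: the hard part is not speaking, it is keeping track of an unfinished task while the human wanders off-script.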









How Google was trying to hide the Dragonfly project

Information that Google was developing a censored search engine codenamed Dragonfly first surfaced in the late summer of this year. It was reported by The Intercept, citing internal leaks. And, as it turned out, an aura of secrecy had surrounded the project for several years.

This is not unusual for new movies, games, services, and so on. In this case, however, the situation was different: Dragonfly’s task was to monitor and filter information, because the customer was the Chinese government.

Inside the company, the project became known in February 2017, although some top managers had been discussing the issue since 2016. Top executives told the engineers that the search engine’s infrastructure would depend on a Chinese partner company with data centers in Beijing or Shanghai. The latter means that the Chinese government could obtain any data from these servers. In addition, the system was to associate users’ requests with their phone numbers. And given that China maintains a database of the telephone numbers of everyone in the country, this means unequivocal identification of every user.

One of the participants in Dragonfly was Yonatan Zunger, who was offered work on a secret search engine project. After some time, he told management that such a system was unacceptable and would violate human rights in China. In response to these claims, the head of the “Chinese” direction, Scott Beaumont, did everything he could to ensure that Zunger and other dissenters no longer participated in the project and knew nothing about it.

In the same year, 2017, Zunger left the company, but three of his former colleagues are still working there and agreed to disclose some new details, on condition of anonymity of course. In particular, according to them, the participants and developers of Dragonfly held meetings under a regime of high secrecy. There were no written notes or other open communication. At the same time, out of 88,000 Google employees, only a few people knew the essence of the project. Some were threatened with dismissal if they discussed the substance of their work with colleagues. According to one of the company’s employees, management took every measure to minimize leaks.

After the information was disclosed, human rights groups such as Amnesty International and Human Rights Watch openly condemned the project and said that the company could become an accomplice in human rights violations in China. Later, the US Senate joined the case, and Vice President Michael Pence demanded that Google stop developing Dragonfly.

However, despite the internal and external resistance, the project went ahead, and recently the company officially confirmed its development. Google CEO Sundar Pichai said that the corporation is considering the possibility of returning to the Chinese market, which the company was forced to leave in 2010 after a series of blockings and censorship disputes.

At the same time, according to The Intercept, co-founder Sergey Brin was personally interested in returning to Chinese territory, as Beaumont said. Allegedly, Brin met with high-ranking Chinese officials to discuss the return. Yet Brin himself had previously denied the possibility of censorship and claimed that he had learned about Dragonfly only from the media.

One way or another, the situation around the censored search engine has split Google. Recently, a number of employees signed an open petition against it. It is not clear how this story will end, but it could seriously shake Google’s market position.








A system that makes calls for you – Google Duplex

Are you the kind of person who hates phone calls, or do you have too little time to handle them all?

Google has announced a system that will solve your problem. It is finally possible to have a natural conversation with computers. Google’s new AI system, Google Duplex, sounds as natural as a real human being! It understands, contextualizes and has correct timing.

Here is an example of how Duplex could schedule an appointment at the hair salon for you.


Of course, this could be invaluable in the business world. Duplex could:

  • allow organizations to reduce their customer service teams,
  • allow customers to book at any time, even in the middle of the night,
  • remind customers about their upcoming appointments and reduce no-shows,
  • give information that is not available online – and upload the information online to reduce similar calls.


One would expect a system like this to have a lot of bugs and not work in real life. Watch Duplex talking to a woman who barely speaks English and making an appointment at a restaurant.


For ordinary users, Duplex could also be very useful and save a lot of time. This graphic shows a typical way to use Duplex.

According to www.wired.com, the technology is, unfortunately, not perfect yet. WIRED writer Lauren Goode answered a call in a demo in June. She was very surprised that Duplex sounded just like a real person, which she thinks can be disorienting. She did manage, however, to confuse the bot by mentioning allergies in the middle of a discussion about available times for a restaurant reservation.

According to neowin.net, there is also going to be a new feature that works together with Duplex. It enables the Assistant to answer and reject unwanted spam calls with minimal input. It will also provide transcripts.

Google has announced that Duplex will be available on its Pixel smartphones. For now, however, it is available only in a few big cities in the US. There is no information on when, or whether at all, it will be launched in Europe; for the moment it is still considered experimental.


Do you think that Google Duplex will become a revolutionary system? What would you use it for? Let me know in the comments!


More info at:

AI Google Blog, 25th November 2018

Wired, 25th November 2018

TechRadar, 25th November 2018

VentureBeat, 25th November 2018

Business Insider, 25th November 2018

Neowin, 25th November 2018


Ivyrevel Digital Fashion House: Discovery or Empty Promise?

Ivyrevel is a clothing brand founded in 2003 that has officially been part of the H&M global business group since 2016. It was co-founded by the popular Swedish fashion blogger Kenza Zouiten Subosic and has been heavily promoted by her through social media channels as well as through her blog (kenzas.se). Ivyrevel has already been on the market for several seasons, but last year the founders decided on a brand-new, innovative relaunch intended to turn it into the first truly digital fashion house. H&M is a major investor, offering strategic and production support. Another partner is PayPal, which is to provide advisory services on payment and distribution issues.

Ivyrevel’s clothes are inspired by urban contrasts, strong independent women and the contemporary now. Its style is polished, loud and extroverted, with feminine silhouettes and ample amounts of attitude. It follows fashion trends and continuously releases new items without pause, reacting to the latest fashion currents.

The creators of the brand say that their site is their palace, their home, their one and only shop window to the world. Consequently, you can find Ivyrevel clothes only online; there is no physical store or showroom where you can try the clothes on. Nevertheless, Ivyrevel ships worldwide, its clothes can be bought through various e-commerce platforms, such as Germany’s Zalando or Britain’s Asos, and the support of a giant like H&M makes Ivyrevel a recognizable brand all around the world.

Riding the hype around the brand relaunch, Ivyrevel announced that it was partnering with Google to bring couture into the digital age with the Data Dress – a personalized dress designed using a smartphone app developed by Ivyrevel and Google. The app tracks each user’s activity and lifestyle, which is then interpreted into a truly unique, on-trend, custom-made Ivyrevel dress.

”It’s such an exciting moment. We’re about to change the fashion industry by bringing the customer’s personality into the design process through data technology. To get a unique piece of clothing today you need to either buy a custom-made design piece or design it yourself, but that is generally not an affordable option and most people lack the design experience. The Data Dress enables women around the world to order a dress made entirely for them, that reflects the way they live their lives,” says Aleksandar Subosic, co-founder of Ivyrevel.

You may ask how it actually works. At the 2016 Google I/O developer conference, Google introduced a new Awareness API that allows for smarter applications that understand where you are, what you are doing, what’s nearby, and even the weather, in order to react more intelligently to your current situation. In 2017, Google introduced a new application that takes advantage of this sort of data in order to… design you a dress.

Through a forthcoming Android application, users can consent to have their activity and lifestyle data monitored – by way of the Awareness API – to create their own personalized, custom-made dress, ordered through the app. It is now officially called the “Data Dress,” says Google.

The idea that you can translate your life and your lifestyle into a unique, wearable look sounds promising, but in reality the resulting creation mainly displays your routes and routines as lines on a map, sans street labels and points of interest. But who would want to wear a roadmap?
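Neither Google nor Ivyrevel has published the Data Dress rendering code, but the “routes as lines” idea itself is simple enough to sketch: reduce a day’s GPS trace to a bare polyline, street labels and all context stripped away. The function below is purely illustrative (names and coordinates are my own):

```python
# Hypothetical sketch of the "routes as lines" pattern: project a GPS trace
# onto a flat canvas and emit it as an SVG polyline -- roughly the kind of
# labelless roadmap the Data Dress was shown printing.

def trace_to_svg(points, size=200):
    """points: list of (lat, lon) tuples from one day of movement."""
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]

    def scale(v, lo, hi):
        # Normalize one coordinate onto the canvas; a degenerate span
        # (all points identical) maps to the center.
        return (v - lo) / (hi - lo) * size if hi > lo else size / 2

    coords = " ".join(
        "%.1f,%.1f" % (scale(lon, min(lons), max(lons)),
                       size - scale(lat, min(lats), max(lats)))
        for lat, lon in points
    )
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
            '<polyline points="%s" fill="none" stroke="black"/></svg>'
            % (size, size, coords))

# Three invented points in Warsaw become one abstract black line.
print(trace_to_svg([(52.229, 21.012), (52.231, 21.017), (52.237, 21.015)]))
```

The exercise also makes the criticism above concrete: once the labels are gone, what is left really is just a line.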

The creators promised that the app would be released later this year, but so far there are no signs of progress.







Better than Google Image Search

Many people are familiar with Google’s image search feature, where you can upload your own image and search with it to find similar images.

Just go to https://images.google.com/

You should see this:

This allows you to do a lot of things, like search for your brand logo and ensure that companies aren’t stealing your images and passing your products off as their own. You know, things of that nature.

What if I told you there was a better and more thorough option? Google image search is good for some basic searches, but Monitori goes way beyond Google’s capabilities. Monitori is not simply a social media analysis tool. At first glance it seems pretty similar to Brand24, but Brand24 doesn’t have the image function that Monitori does.

With its search feature, you can upload an image of your brand logo and it returns a list of every place online where your brand image shows up. I tested this feature with a very small and little-known brand, and to my surprise there were results! This is going to change the way companies operate, and it is a powerful feature of this SaaS company.
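Monitori has not disclosed how its matching works, but one textbook way to find re-uploads of a logo is perceptual hashing. Here is a minimal sketch of the classic “average hash”, with tiny hand-made 2×2 grids of grayscale values standing in for real, downscaled bitmaps (everything below is illustrative, not Monitori’s method):

```python
# Average hash: fingerprint an image with one bit per pixel (brighter than
# the mean or not). Re-encoded copies of the same image tend to produce
# fingerprints that differ in few or no bits.

def average_hash(pixels):
    """pixels: 2D list of grayscale values for an already-downscaled image."""
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if v > avg else "0" for v in flat)

def hamming(h1, h2):
    # Number of differing bits between two equal-length hashes.
    return sum(a != b for a, b in zip(h1, h2))

logo      = [[200, 200], [10, 10]]
suspect   = [[190, 210], [5, 20]]    # the same logo, slightly re-encoded
unrelated = [[10, 200], [200, 10]]

assert hamming(average_hash(logo), average_hash(suspect)) == 0
print(hamming(average_hash(logo), average_hash(unrelated)))  # larger distance
```

A crawler could hash every image it sees and report pages whose hashes sit within a small Hamming distance of your logo’s hash.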



Lost in translation no more? Google’s AI invents own language

Google Translate has, in all its ten years of existence, been a quite useful tool – appreciated by some (especially those who know how difficult machine translation is), ridiculed by others, used by most. It has never really lived up to professional standards, though. And, to take that up front, that hasn’t changed since you last used it half an hour ago. But Google is about to use its main advantage: its data – its 140 billion translated words daily.

In order to do that, the ‘machine’ had to be made adaptive. One main obstacle on that road was that, with 103 supported languages, the idea of creating comprehensive language pairs (meaning direct translation from each language to potentially every other supported one) was close to impossible. Translation would usually go via English. And that hampered any real progress, because even if you managed to improve the result, what you would have improved would not have been the link between the two target languages but the link between either or both of them and English. As soon as you took English out of the equation, the progress would probably be lost. Translation is a delicate field if you aim for good results.

What you want to do instead, according to American and German researchers, is create a so-called neural network: a new common representation that can serve as a link between languages the machine has not been trained to translate directly. If it was able to translate between Hindi and Hebrew, and between Hebrew and French, then it can now also translate between Hindi and French without any intermediate step. Another reason why neural network systems have been hailed as the new star of the machine-translation universe is that they take contextual meaning into account. Before – and still to an extent, because all machine translation by definition has to be statistical if it doesn’t want to be random – a sentence was translated by translating the individual words or idioms and then putting them together. This is how things like this happen:



This brings us to the main point: Google’s new common language (or any neural-network interlingua, for that matter), unlike English, can be changed, improved and adjusted with the help of users’ requests.

Which tempts some to the conclusion that, for one of the first times, AI – Artificial Intelligence – is actually at play in a user-facing field. But that’s a topic for another day…
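As a footnote, the zero-shot idea described above can be caricatured in a few lines. Suppose each trained language pair only teaches the system a mapping into a shared concept space; an unseen pair then falls out for free. The toy below is a word-for-word lookup table – nothing like a real neural network, which learns continuous vectors, and all the vocabulary is illustrative:

```python
# Toy interlingua: words from each language map to shared concept IDs.
# Training data for Hindi-Hebrew and Hebrew-French fills the table; the
# unseen pair Hindi -> French then works with no English pivot at all.

to_concept = {
    ("hi", "pani"): "WATER", ("he", "mayim"): "WATER", ("fr", "eau"): "WATER",
    ("hi", "roti"): "BREAD", ("he", "lechem"): "BREAD", ("fr", "pain"): "BREAD",
}
# Invert the table: (target language, concept) -> word.
from_concept = {(lang, c): w for (lang, w), c in to_concept.items()}

def translate(sentence, src, tgt):
    # Each word goes source word -> shared concept -> target word.
    return " ".join(from_concept[(tgt, to_concept[(src, w)])]
                    for w in sentence.split())

print(translate("pani roti", "hi", "fr"))  # zero-shot: prints "eau pain"
```

The point of the caricature: once both sides of every trained pair land in the same shared space, any route through that space is available, trained or not.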


Professional wearable gadgets

Probably everyone who is even slightly interested in technology is aware that personal wearable gadgets like fitness trackers, smartwatches and VR headsets are getting more and more popular. But probably not many of you know how potent the market for workday gadgets is.

Those gadgets could be smart glasses that show an assembly scheme to a manufacturing worker; wearable computers operated entirely by voice; or helmets and caps with sensors that notice when a worker is falling asleep. All of them have one thing in common: they are meant to improve a worker’s efficiency by cutting costs or increasing output. These products seem easier to sell, as they do not need to define the problems they are solving (which is the case with consumer-oriented products); rather, they solve well-defined existing problems.


The first gadgets of this type worth knowing about are the helmets and caps from SmartCap Technologies. They look like regular safety helmets or trucker caps but have an additional function. Using brain-wave (EEG) sensors, they can detect when a worker is in danger of falling asleep and, in such a case, trigger an alarm on his smartphone; when the danger is extreme, they can notify his supervisor.

This function is extremely important in the case of truck drivers in the mining industry. They control trucks the size of a family house, and if they fall asleep even for a second, the consequences could be tragic.
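SmartCap’s actual algorithm is proprietary, but the two-stage escalation described above – phone alarm first, supervisor in the extreme case – can be sketched with invented thresholds on a normalized drowsiness score:

```python
# Hypothetical sketch (not SmartCap's real logic): grade a drowsiness
# score derived from EEG band power and escalate the response.

def drowsiness_alert(score, warn=0.6, critical=0.85):
    """score: 0.0 (fully alert) .. 1.0 (asleep); thresholds are invented."""
    if score >= critical:
        return "notify_supervisor"   # extreme danger: escalate past the driver
    if score >= warn:
        return "phone_alarm"         # trigger an alarm on the driver's phone
    return "ok"

for s in (0.3, 0.7, 0.9):
    print(s, drowsiness_alert(s))
```

The real engineering problem, of course, is producing a trustworthy score from noisy EEG in a moving truck; the escalation policy on top of it is the easy part.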

This makes the helmets sell rather easily, since buying them is less expensive than covering the losses caused by accidents. SmartCap charges $150–$200 a piece plus a $30–$50 monthly subscription fee for the app, and they are selling well.


The other gadget is a clip-on, voice-controlled computer by Theatro, created for sales clerks and cashiers. The device lets its user check inventory, look up a product code, communicate with other employees or find out when their next break is. Why is it much better than an app on a smartphone? Because the user does not need to avert their eyes from a customer – as Mr. Fitzgerald says, “most customers when they see somebody doing that, they feel like they can go look at their own screen and get their own information.” That causes the loss of a sale and, in fact, a productivity drain rather than a productivity gain.



Other gadgets worth mentioning are the interactive textiles from Google’s Project Jacquard and Cintas, which let hospital employees and patients communicate with a device by touching their work clothing, and head-mounted cameras for field service technicians, which allow them to ask for advice without using their hands, occupied as they are by the task at hand.









Inspiration & Motivation in a new tab of your browser

For many of us, the browser has already become the main working tool, and that has its advantages and disadvantages. We can work on any computer from anywhere, but we have to fight a lot of distractions. So this article will tell you how to counter them (the distractions, I mean) without losing motivation and inspiration, using extensions for Google Chrome.

  • Momentum


Momentum addresses you by name and turns each new tab into a beautiful start screen with useful information such as the weather and the time, flavored with a motivating quote. There is also a to-do list, a search bar and quick links that can be configured to reach the resources you need (just don’t add Facebook).

Momentum Extension





  • Motivation

This extension asks for your date of birth and turns the browser into an unforgiving timer showing your age in years, down to 0.000000001. Watching your time fly by, you lose all desire to waste it!







  • Mortality

Another strong extension and a great motivator for action. Like Motivation, Mortality shows a timer counting your years with millisecond precision, and it also displays the time you have lived and the time remaining (based on an average life expectancy of 80 years). Each circle equals one month of life.

Mortality








  • Dayboard

This extension replaces your calendar and day planner with a short list of important tasks. When you get down to work, you add five tasks scheduled for today and then see them every time you open a new tab. What can bring a greater sense of satisfaction than the tick standing next to a completed task?








  • Be Limitless

Be Limitless also invites us to set goals: the first for the day, the second long-term. But this extension does not stop there – it analyzes how much time you spend on various sites and hands you those statistics when you open a new tab. In addition, there are quick notes.

Be Limitless







  • Random Quote

If you prefer motivating quotes, then you’ll love Random Quote. The extension shows you a beautiful quote from a famous person, presented in a minimalist style. The quotes are in English only, but even that can serve as motivation for language learning.

Random Quote









  • Dream Afar

Dream Afar does not motivate – it inspires, which is also important. Every time you open a new tab, you will enjoy the scenery of one of the most beautiful corners of our planet, and perhaps even work harder to save up for a new journey. Besides the beautiful picture, the screen also makes room for search, a clock, the weather, history, bookmarks and applications.

Dream Afar







  • Google Art Project

Artworks inspire no less than spectacular scenery. Thanks to a joint project between Google and various galleries around the world, you can admire paintings by famous artists in your new tabs. For the times when you don’t feel like admiring art, there is a button with a list of frequently visited sites so you can jump straight to the right one.

Google Art Project







  • Delight

Pictures by talented photographers are great, but video looks much more spectacular, you must agree. With Delight, your surfing begins with a fascinating timelapse of some beautiful place. If you need the weather, bookmarks or apps, they are also close at hand.






(autonomous vehicles – part 1)


Taking the next step in its Blueprint for Mobility, Ford today – in conjunction with the University of Michigan and State Farm® – revealed a Ford Fusion Hybrid automated research vehicle that will be used to make progress on future automated driving and other advanced technologies.

Picture: http://www.wired.com/wp-content/uploads/2015/11/IMG_6155-932×524.jpg (or http://www.wired.com/2015/11/ford-self-driving-car-plan-google/#slide-5)


Picture: http://www.wired.com/wp-content/uploads/2015/11/IMG_6689-932×524.jpg (or http://www.wired.com/2015/11/ford-self-driving-car-plan-google/#slide-6)


Articles: (from WIRED – http://www.wired.com/category/transportation/)

  • “Ford’s Skipping the Trickiest Thing About Self-Driving Cars”

Author: Alex Davies; Date of Publication: November 10, 2015


  • “Ford’s Testing Self-Driving Cars In a Tiny Fake Town”

Author: Alex Davies; Date of Publication: November 13, 2015


  • “A Google-Ford Self-Driving Car Project Makes Perfect Sense”

Author: Alex Davies; Date of Publication: December 22, 2015



  • “The Clever Way Ford’s Self-Driving Cars Navigate in Snow”

Author: Alex Davies; Date of Publication: January 11, 2016



  • “Google’s Self-Driving Cars Aren’t as Good as Humans—Yet”

Author: Alex Davies; Date of Publication: January 12, 2016



Nowadays, looking at the visions and strategies of companies operating within the automotive sector, more and more automobile manufacturers are deciding to develop technologies for driverless, self-driving cars. From the article “Ford’s Skipping the Trickiest Thing About Self-Driving Cars” it can be concluded that there are two paths towards automotive autonomy. Conventional – well-known, “traditional” – automobile manufacturers favor a step-by-step approach, adding features one by one so that humans cede control over time. This group argues that the approach lets them refine the technology, accustom consumers to the coming change, and keep selling conventional cars in the meantime. Google, however, regards that as complete nonsense and has decided to concentrate exclusively on fully autonomous vehicles that are not even equipped with a steering wheel. Alex Davies, the author of the articles on self-driving cars presented in this post, believes that Google – as well as Ford – sees no reason for the middle ground of semi-autonomy.

In the world of automotive engineering, automation is categorized into six levels, from Level 0 to Level 5. The lowest level has no autonomous technology at all; each subsequent level adds progressively more sophisticated technology, up to Level 5, in which computers handle everything and the driver – now a passenger – is strictly along for the ride.

The Ford Motor Company has not said much about its plans for the autonomous age, but it is road-testing a fleet of self-driving Ford Fusion Hybrids in Dearborn, Michigan, and expects to expand beyond its hometown. The company’s special Fusions are loaded with cameras, radar, LIDAR, and real-time 3D mapping to see and navigate the world around them – which in this case includes concrete, asphalt, fake brick, and dirt. Ford is also the first automaker to test a fully autonomous car at Mcity, the little fake town built just for self-driving vehicles. Mcity, officially known as the University of Michigan’s Mobility Transformation Center, is a 32-acre artificial metropolis intended for testing automated and connected vehicle technologies. The company aims to offer a fully autonomous car in five years. According to Alex Davies, Ford decided to concentrate on fully independent vehicles because it wants to avoid the problems of semi-autonomous technology. Like the vast majority of automakers, Ford operates at Level 2: its cars can be equipped with plenty of active safety systems – blind-spot monitoring, parking assist, pedestrian detection, adaptive cruise control – but the driver is always in charge. With Level 3 capability, the car can steer, maintain proper speed, and make decisions such as when to change lanes, but always with the expectation that the driver will take over if necessary. Ford aims to jump directly to Level 4 – full autonomy, in which the car is capable of doing everything and human engagement is strictly optional.

Ford wants to skip Level 3 because it contains one of the greatest challenges of this technology: how to safely shift control from the computer to the driver, particularly in an emergency. Alex Davies describes this as a balancing act, one that requires giving drivers the advantage of autonomy – not having to pay attention – while ensuring they are ready to take the wheel if the car encounters something it cannot handle.

Also telling is data presented by another automaker, Germany’s Audi, whose tests show it takes a driver an average of 3 to 7 seconds – and as long as 10 – to snap to attention and take control, even with flashing lights and verbal warnings. A lot can happen in that time (a car traveling 60 mph covers 88 feet per second), and automakers have different concepts for solving this issue. Audi has decided to implement an elegant, logical human-machine interface. Volvo, the Swedish premium automobile manufacturer, is creating its own HMI and says it will accept full responsibility for its cars while they are in autonomous mode.
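The arithmetic behind Audi’s figure is worth spelling out: 60 mph is exactly 88 feet per second, so a 3-to-10-second takeover window means hundreds of feet travelled before the human is back in control. A quick sketch:

```python
# Distance covered while a driver "snaps to attention" (Audi's 3-10 s figure).

def feet_per_second(mph):
    # 5280 feet per mile, 3600 seconds per hour.
    return mph * 5280 / 3600

def takeover_distance(mph, seconds):
    return feet_per_second(mph) * seconds

print(feet_per_second(60))               # 88.0
for t in (3, 7, 10):
    print(t, takeover_distance(60, t))   # 264.0, 616.0, 880.0 feet
```

At highway speed, even the best-case 3-second handover consumes the better part of a football field – which is exactly why skipping Level 3 is a defensible engineering call.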

Both Google and Ford, however, are steering clear of this problem. “Right now, there is no good answer, which is why we are kind of avoiding that space,” stresses Dr. Ken Washington, the automaker’s VP of research and advanced engineering. “We are really focused on completing the work to fully take the driver out of the loop.” Although Ford has not revealed much about its capacities – how many cars are in the test fleet, or how much ground they have covered – Washington believes a fully autonomous car within five years is reasonable, if work on the software controlling it progresses well. Ford would limit the deployment of its autonomous vehicle to those regions for which it can provide the extremely detailed maps self-driving cars require. Currently, the American multinational automaker is using its self-driving Fusion Hybrids to make its own maps; it remains to be seen whether that is achievable at large scale, or whether Ford will work with a company like TomTom or Here. Washington admits that the company’s strategy is pretty similar to Google’s, but says it rests on two crucial differences. First, Ford already builds cars, and will continue developing and improving driver-assistance features even as it works on Level 4 autonomy. Second, Ford has no plans to sell wheeled pods in which people are simply along for the ride: “drivers” will always have the option to take the wheel. “We see a future where the choice should be yours,” Washington concludes.

Even though the age of self-driving cars has become inevitable, several problems remain to be solved before these vehicles can be officially deployed. One of the greatest challenges is getting the robots to handle bad weather. All the autonomous cars now in development use a variety of sensors to analyze the world around them: radar and LIDAR devices perform most of the work – looking for other cars, pedestrians, and other obstacles – while cameras typically read street signs and lane markers. Alex Davies, the author of the article "The Clever Way Ford's Self-Driving Cars Navigate in Snow," stresses that in bad weather those devices can hardly scan the environment – "if snow is covering a sign or lane marker, there is no way for the car to see it". Humans typically make their best guess, based on visible markers like curbs and other cars, and Ford says it is teaching its autonomous cars to do something similar. As emphasized above, Ford, like the other players in this area, is creating high-fidelity 3D maps of the roads its autonomous cars will travel. Those maps contain specific data like the exact position of curbs and lane lines, trees and signs, along with local speed limits and other relevant rules. The more a car learns about a region, the more it can concentrate its sensors and computing power on detecting temporary obstacles – people and other vehicles – in real time. The maps have another advantage: the car can use them to figure out, within a centimeter, where it is at any given moment.
Alex Davies gives the following example to illustrate the point: "The car can't see the lane lines, but it can see a nearby stop sign, which is on the map. Its LIDAR scanner tells it exactly how far it is from the sign. Then, it's a quick jump to knowing how far it is from the lane lines." Jim McBride, Ford's head of autonomous research, puts it this way: "We're able to drive perfectly well in snow, we see everything above the ground plane, which we match to our map, and our map contains the information about where all the lanes are and all the rules of the road." Ford says it tested this ability in real snow at Mcity. Although the idea of self-locating by deduction may not be exclusive to Ford, this automaker is the first to publicly show it can use its maps to navigate on snow-covered roads. Still, this technology does not solve all the problems of autonomous driving in bad weather: falling rain and snow can interfere with LIDAR and cameras, and driving safely requires more than knowing where you are on a map – you also need to be able to see those temporary obstacles.
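Davies' stop-sign example boils down to simple geometry: range a landmark that is on the map, deduce your own position, and everything else on the map falls into place. A minimal one-dimensional sketch (all names and numbers are my illustration, not Ford's actual software):

```python
# Minimal 1-D sketch of "localization by deduction": the car cannot
# see the snow-covered lane line, but it can range a mapped landmark.

# High-fidelity map: positions in meters along the direction of travel.
ROAD_MAP = {"stop_sign": 12.0, "lane_line": 3.5}

def locate_lane_line(lidar_range_to_sign: float) -> float:
    """From the measured distance to the stop sign, deduce the car's
    own position, then the distance to the (invisible) lane line."""
    car_position = ROAD_MAP["stop_sign"] - lidar_range_to_sign
    return ROAD_MAP["lane_line"] - car_position

# LIDAR says the sign is 10.0 m ahead -> the car is at 2.0 m,
# so the lane line is 1.5 m away even though no camera can see it.
print(locate_lane_line(10.0))  # 1.5
```

The real system does this in three dimensions against thousands of mapped features at once, which is how it achieves centimeter-level positioning, but the underlying deduction is the same.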


Picture: http://www.wired.com/wp-content/uploads/2016/01/Snowtonomous_4693_Story-art.jpg (source: http://www.wired.com/2016/01/the-clever-way-fords-self-driving-cars-navigate-in-snow/)

According to the article "A Google-Ford Self-Driving Car Project Makes Perfect Sense" (Alex Davies, December 22, 2015) and a Yahoo! Autos report, Ford and Google plan to create a joint venture to work on self-driving cars. The setup would put Google's very sophisticated autonomous software in Ford cars, playing to each company's strengths: Google's fleet of self-driving cars has logged more than 1.2 million miles in the past few years, and covers 10,000 more each week, while Ford makes and sells millions of cars each year. "If it is true, it makes perfect sense," Davies emphasizes. In his opinion, it is reasonable that Google would want to cooperate with an established, experienced automaker, because Google has never had to think about the tens of thousands of parts that must come together under incredibly strict federal guidelines, or about the manufacturing processes that require huge plants and specific competencies. Ford has been doing all that for a century, so it knows a lot that Google does not. Ford began talking publicly about its autonomous driving research two years ago, including its interest in finding new partners; Mark Fields, the CEO of the Ford Motor Company, has said that the company is actively looking to work with startups and bigger companies, and that this work is a priority for him. The cooperation would also make sense because, as presented above, Google's and Ford's approaches to autonomous driving are remarkably similar. The vast majority of automakers plan to deploy self-driving technologies progressively, adding features one by one so drivers cede control over time.
Google, by contrast, decided to build a car with no steering wheel, no pedals, and no role for the human other than sitting still and behaving while the car does the driving. Automobile manufacturers like Mercedes, Audi, GM, and Tesla plan to offer features that let the car do the driving some of the time, using the human as backup in emergency situations. Because that level of autonomy raises the issue of safely transferring control between robot and human – particularly in dangerous situations – Google and Ford have both decided to avoid that part. Davies also believes it is very unlikely that Ford would be content with providing nothing but wheels, motors, and seats while Google does all the relevant work. Bill Ford, the executive chairman and former CEO of the Ford Motor Company, has stressed that the last thing he wants to witness is Ford reduced to the role of a hardware subcontractor for companies doing the more creative, innovative work. Dr. Ken Washington, Ford's VP of research and advanced engineering, admitted that he wants the automaker to build its own technology: "We think that's a job for Ford to do."

While focusing on the potential cooperation between Ford and Google, it is also worth noting that "Google's Self-Driving Cars Aren't as Good as Humans (Yet)". Google recently announced that its engineers assumed control of an autonomous vehicle 341 times between September 2014 and November 2015. That may sound like a lot, but Google's autonomous fleet covered 423,000 miles in that time. Google's cars have never been at fault in a crash, and Google's data shows a meaningful drop in "driver disengagements" over the past year. Google's rate of disengagements is also far lower than those declared by other companies testing autonomous technology in California, including Nissan, VW, and Mercedes-Benz. Of the 341 instances where Google engineers took the wheel, 272 stemmed from the "overall stability of the autonomous driving system" – things like communication and system failures. Chris Urmson, the project's technical lead, does not find this very troubling, because, as he states, "hardware is not where Google is focusing its energy right now"; the team is more concerned with how the car makes decisions, and will make both the software and hardware more robust before entering the market. The remaining 69 takeovers concern more important issues: they were "related to safe operation of the vehicle," meaning those times when the autonomous car might have made a bad decision.
Thanks to Google's simulator program, these incidents do not remain a mystery: if the engineer in the car is not fully convinced that the AI will perform the appropriate action, she takes control of the vehicle; later, back at headquarters in Mountain View, she transmits all of the car's data into a computer, and the team sees what the car would have done had she not taken the wheel. According to data Google recently shared with the California Department of Motor Vehicles (DMV), 13 of those 69 incidents would have led to crashes. Google's cars have driven 1.3 million miles since 2009; they can identify hand signals from traffic officers and "think" at speeds no human can match. The Google cars have been involved in 17 crashes, but have never been at fault, and Google had previously predicted the vehicles will be road-ready by 2020. At this point, the team usually cannot solve problems with a rapid adjustment to the code – the remaining challenges are far more sophisticated and complicated. Urmson presents the following example: "In a recent case the car was on a one-lane road about to make a left, when it decided to turn tight instead – just as another car was using the bike lane to pass it on the right. Our car was anticipating that since the other car was in the bike lane, he was going to make a right turn. The Google car was ahead and had the right of way, so it was about to make the turn.
Had the human not taken over, there would have been contact." Avoiding a repeat of such a fringe case is not easy, but Google says its simulator program "executes dozens of variations on situations the team has encountered in the real world," which lets the team test how the car would have reacted under slightly different circumstances. Google is getting better: the disengagement numbers have dropped over the past year. Eight of the 13 crash-likely incidents took place in the last three months of 2014, over the course of 53,000 miles; the other five occurred in the following 11 months and 370,000 miles. Assuming those incidents would have ended in crashes, that stands for one accident every 74,000 miles. "Good, but not as good as humans," Urmson concludes. According to new data from the Virginia Tech Transportation Institute, Americans log one crash per 238,000 miles. Before bringing its technology to market, Google must make its cars safer than human drivers (who cause more than 90 percent of the crashes that kill more than 30,000 people in the US every year). "You need to be very thoughtful in doing this, but you don't want the perfect to be the enemy of the good. We need to make sure we can get that out in the world in a timely fashion," Urmson stresses. Google's disengagement numbers must keep dropping. The downward trend will continue, Urmson says, but as the team begins testing in tougher conditions, like bad weather and busier urban areas, there will be sporadic upticks. "As we push the car into more complicated situations, we would expect, naturally, to have it fail," Urmson says. "But overall, the number should go down."
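The improvement Urmson describes shows up clearly when the figures quoted above are turned into miles-per-incident rates. A quick sketch of the arithmetic (the "74,000 miles" figure corresponds to the more recent 11-month window):

```python
# Crash-likely disengagement rates from the quoted Google data,
# compared with the Virginia Tech human-driver benchmark.

def miles_per_incident(miles: float, incidents: int) -> float:
    """How many miles, on average, between crash-likely incidents."""
    return miles / incidents

late_2014 = miles_per_incident(53_000, 8)    # last 3 months of 2014
year_2015 = miles_per_incident(370_000, 5)   # the following 11 months
human_rate = 238_000                         # Virginia Tech figure

print(f"late 2014: one per {late_2014:,.0f} miles")
print(f"2015:      one per {year_2015:,.0f} miles")
print(f"humans:    one per {human_rate:,} miles")
```

The rate improved more than tenfold within a year, yet the 2015 figure is still roughly a third of the human benchmark, which is exactly Urmson's "good, but not as good as humans" point.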

I would like to stress that broadly defined automotive autonomy will be one of the most interesting topics of the near future. In my opinion, both of the paths towards Level 4 and Level 5 automation are filled with sophisticated solutions and processes, and I have to admit that the second path – the one selected by the two companies that have decided to skip semi-autonomous technology, Google and Ford – is even more fascinating. Even if the cooperation between those two companies is never confirmed, both Google and Ford will be among the most powerful manufacturers competing in the race for full automation, and within five years' time we may have the opportunity to experience using our vehicles without the necessity to drive. It is also crucial to remember the two main elements enabling self-driving cars to ride: the high-fidelity 3D maps that, together with radar and LIDAR devices, will be deployed in autonomous vehicles. I believe that systems composed of a LIDAR scanner and 3D maps will provide the most accurate artificial drivers in terms of broadly defined safety. All things considered, even though the age of autonomous vehicles is on its way, several problems remain to be solved before we will be acting – voluntarily – only as passengers: behavior of the system at high speeds, the issue of trust, bad road conditions, and emergency situations.


Picture: http://www.wired.com/wp-content/uploads/2015/09/Screen-Shot-2014-12-22-at-2.10.41-PM.jpg (source: http://www.wired.com/2016/01/google-autonomous-vehicles-human-intervention/)



Tagged , , , ,