
MAVEN – GM’s (DRIVERLESS) CAR-SHARING PROGRAM


MAVEN, GM’s CAR-SHARING SCHEME, IS REALLY ABOUT A DRIVERLESS FUTURE

(autonomous vehicles – part 4)

#AUTONOMOUS VEHICLES, #GENERAL MOTORS, #LYFT, #RIDE SHARING, #UBER


Picture: http://www.wired.com/wp-content/uploads/2016/01/GM_Spark_STORY-ART1.jpg (article: http://www.wired.com/2016/01/maven-gms-car-sharing-scheme-is-really-about-a-driverless-future/)

 

Article: (from WIRED – http://www.wired.com/category/transportation/)

  • “Maven, GM’s Car-Sharing Scheme, Is Really About a Driverless Future”

Author: Alex Davies; Date of Publication: January 21, 2016

URL:http://www.wired.com/2016/01/maven-gms-car-sharing-scheme-is-really-about-a-driverless-future/

 

General Motors is launching a car-sharing program called Maven, initially available in exactly one city. Although the concept resembles the approach of ZipCar, a car-sharing company, GM does not want to be perceived as a ZipCar competitor: within this project, the company is concentrating on the future. The Detroit automaker has unveiled even more ambitious projects this month. It introduced the Chevy Bolt, the first mass-market electric car; it acquired the remains of Sidecar, the defunct Uber competitor, to dismantle it for parts; and, as presented in a separate post, it is cooperating with Mobileye to develop maps for robo-cars. GM is also working with Lyft to build a network of driverless vehicles. Because Maven is available only in Ann Arbor, Michigan, the project may seem a strange step, but GM intends to deploy the service in other cities soon – and it will not offer the convenience of Uber or Lyft. “Oh sure, there’s money to be made in car-sharing, as ZipCar and others have shown. But it’s small potatoes for a company like GM. It isn’t until you take the long view that this move makes sense,” Davies stresses. Maven can be the base for the self-driving car network General Motors wants to create. The typical “owner-driver” model that has been the most essential part of the auto industry for a century will not vanish anytime soon, but the industry is in the middle of a fundamental, revolutionary transition.
GM claims that some 5 million people worldwide use vehicle-sharing services like Uber, and that number is anticipated to reach 25 million by 2020. The emerging car-sharing industry will be entirely remade by autonomous vehicles, and GM is trying to position itself for that change now. “We feel that we are very well-positioned as a company to be at the very forefront of this change in ownership model, change in mobility, particularly in the urban environment,” says GM President Dan Ammann. Maven is key to how GM is addressing that shift. At first, the service will be accessible to students and faculty at the University of Michigan’s Ann Arbor campus, where Chevrolet vehicles – Volts, Sparks, Malibus, and Tahoes – will fill 21 parking spots. Users will reserve cars via the Maven app and use their phones to unlock and start the vehicles. The cars will support Android Auto and Apple CarPlay, which will let users carry their digital lives with them into whatever car they rent. “It’s a truly personalized experience. You can take your life with you,” says Julia Steyn, GM’s head of urban mobility programs. The program is free to join and charges as little as $6 an hour to use a car, which includes insurance and gas. GM is not the only automobile manufacturer exploring new business models: Ford, among other undertakings, is testing peer-to-peer car sharing for its employees, and will soon let up to six people jointly lease its cars.
BMW ran a vehicle-sharing program in the Bay Area until November of last year, and Daimler’s Car2Go service operates in many cities throughout the US and Europe. In November, Audi deployed a premium car-sharing service in San Francisco and Miami. In the next few months, GM aims to extend Maven to other municipalities, and its existing mobility programs, including car-sharing services in New York City and Germany, will be folded into the new program. Maven will offer services tailored to users in cities, on campuses, and at residences like co-ops. “We see it as a real commercial opportunity,” Ammann says. There are many ways to expand the new program: geographically, with more cars, with more types of cars, by trying new pricing schemes, and by bringing in new customers. For the moment, all that is distinct from GM’s autonomous vehicle research and its partnership with Lyft, through which GM wants to launch a ride-hailing network using driverless vehicles. Maven can be the groundwork for that kind of service, with ready-made groups of users, organized by geography and by how they like to travel, already using a GM service. “We’re putting in place all the building blocks,” Ammann says. “This is all part of a very comprehensive approach.”

In my opinion, the development of autonomous vehicles means the automotive industry – and thus its most essential part, the classic “owner-driver” model – is in the middle of a fundamental, revolutionary transition. According to General Motors, nearly 5 million people worldwide use vehicle-sharing services like Uber, and that number is anticipated to reach 25 million by 2020. Yet even though car-sharing programs are estimated to serve more than 20 million people globally by 2020, this emerging industry will itself be entirely remade by the introduction of autonomous vehicles. Thanks to GM’s current strategy for the Maven platform – ready-made groups of users, organized by geography and by how they like to travel, already using a GM service – this service can be a perfect base for an autonomous car-sharing network. All things considered, I believe that driverless car-sharing services will have a significant impact on the broadly defined issue of travelling. In my opinion, autonomous car-sharing companies will also offer their cars in a manner similar to bus services (specific routes and stops, for commuting or shopping), with the system based on an Internet platform where people simply buy their “tickets”. Focusing on the advantages of autonomous vehicles, it will be possible to rent a car and plan even the longest journeys without thinking about sitting in the “driver’s seat”.

Nevertheless, it is important to consider that some of the benefits provided by autonomous vehicles may be limited by specific driverless-car regulations.

MZ


GENERAL MOTORS AND CAMERAS ON CUSTOMER CARS


GM IS USING CAMERAS ON CUSTOMER CARS TO BUILD SELF-DRIVING CAR MAPS

(autonomous vehicles – part 3)

#AUTONOMOUS VEHICLES, #GENERAL MOTORS


Picture: http://www.wired.com/wp-content/uploads/2016/01/GM_Mobileye_R5_8.10-094.jpg (article: http://www.wired.com/2016/01/gms-building-self-driving-car-maps-with-cameras-on-customer-cars/)

 

Article: (from WIRED – http://www.wired.com/category/transportation/)

  • “GM’s Using Cameras on Customer Cars to Build Self-Driving Car Maps”

Author: Alex Davies; Date of Publication: January 5, 2016

URL:http://www.wired.com/2016/01/gms-building-self-driving-car-maps-with-cameras-on-customer-cars/

 

General Motors is planning to use cameras on customer cars to create and maintain the maps that will help self-driving cars navigate. The American automaker is evaluating new technology from Mobileye, an Israeli provider of visual processing chips and software. Mobileye’s technology can identify vehicles, pedestrians, and other obstacles, as well as road markings, signs, and traffic lights. It already powers popular features like lane departure warnings and is built into hundreds of thousands of GM cars. GM’s concept is to pull that camera data from customer vehicles – via its OnStar system – to build exceptionally accurate, constantly updated road maps. Those maps would let a driverless car know its position within about 10 centimeters – a great advantage over contemporary GPS systems, which estimate location with margins of error measured in meters. “Good enough for knowing what street you’re on, but not for navigating a robo-car through traffic,” Davies stresses. Mapping is an increasingly critical part of the pursuit of automotive autonomy: the more data a vehicle has about a specific area, the more it can concentrate its sensors and computing power on temporary obstacles like cars, pedestrians, and cyclists. “Creating and updating maps using on-board camera technology supplies the missing link between on-board sensing and the requirement for full redundancy to enable safe autonomous driving,” says Amnon Shashua, Mobileye’s co-founder and CTO.
Alex Davies adds: “That’s why the consortium of BMW, Audi, and Mercedes recently bought Nokia’s mapping arm, HERE, for $3.1 billion. It’s why TomTom is still relevant. It’s why Google has a fleet of cars loaded with sensors scoping out all the roads its autonomous cars will later traverse.” Because the maps General Motors aims to create will depend on visual data, they are unlikely to be as comprehensive or precise as those HERE, TomTom, and Google are developing, which are based principally on LIDAR data. Nevertheless, GM has an immediate advantage in using technology that goes into its cars anyway: scale. “GM is committed to bringing semi-autonomous and fully autonomous vehicles to our customers, and this technology will be a critical enabler to getting us there,” says Mark Reuss, GM’s head of product. GM is currently testing the technology on five cars; a second phase would extend that to 30. If testing goes well, a company representative stresses, “this could move quickly,” and the company could deploy the technology in its new vehicles later this year.
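The centimeter-level positioning described above is, at bottom, a least-squares fit: the car measures distances to features whose positions the map already stores, then solves for the one position consistent with all of them. The sketch below is a toy illustration of that idea, not GM’s or Mobileye’s actual pipeline; the landmark coordinates are invented and the ranges are noise-free.

```python
import math

# Positions of mapped landmarks (metres) -- invented for illustration.
LANDMARKS = [(0.0, 0.0), (30.0, 5.0), (12.0, 25.0)]

def locate(landmarks, ranges, guess=(1.0, 1.0), iters=25):
    """Gauss-Newton least-squares fix of (x, y) from ranges to known landmarks."""
    x, y = guess
    for _ in range(iters):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (lx, ly), r in zip(landmarks, ranges):
            d = math.hypot(x - lx, y - ly) or 1e-9    # predicted range
            jx, jy = (x - lx) / d, (y - ly) / d       # d(range)/dx, d(range)/dy
            res = r - d                               # measured minus predicted
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * res; b2 += jy * res
        det = a11 * a22 - a12 * a12                   # solve the 2x2 normal equations
        if abs(det) < 1e-12:
            break
        x += (a22 * b1 - a12 * b2) / det
        y += (a11 * b2 - a12 * b1) / det
    return x, y

# Simulate a car at a known spot and the ranges its sensors would report.
true_pos = (10.0, 8.0)
measured = [math.dist(true_pos, lm) for lm in LANDMARKS]
x, y = locate(LANDMARKS, measured)   # recovers roughly (10.0, 8.0)
```

With three consistent range measurements the fit pins the position exactly; in practice the measurements are noisy and the same least-squares machinery averages the error down, which is where the “within about 10 centimeters” figure comes from.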

Even though the constantly updated maps General Motors aims to develop will depend on visual data, and are therefore unlikely to be as precise as maps based principally on LIDAR data, GM will pull the camera data through its OnStar service (a personal onboard assistant) and so gains an immediate advantage by exploiting technology that can be activated in its cars anyway. All things considered, mapping is an increasingly critical, valuable part of the pursuit of automotive autonomy.

MZ


AUTONOMOUS RACING CARS – SAFER ROADS FOR EVERYONE


RACING SELF-DRIVING CARS WILL MAKE ROADS SAFER FOR EVERYONE

(autonomous vehicles – part 2)

#AUTONOMOUS VEHICLES, #MOTORSPORTS


Picture: http://www.wired.com/wp-content/uploads/2015/12/G7C5023.jpg (article: http://www.wired.com/2015/12/roborace-autonomous-vehicle-racing/)

 

Articles:

  • “Racing Self-Driving Cars Will Make Roads Safer for Everyone”

Author: Alex Davies; Date of Publication: December 2, 2015

URL:http://www.wired.com/2015/12/roborace-autonomous-vehicle-racing/ 

  • “Formula E & Kinetik Announce Driverless Support Series”

Author: Formula E; Date of Publication: November 27, 2015

URL:http://www.fiaformulae.com/en/news/2015/november/formula-e-kinetik-announce-roborace-a-global-driverless-championship.aspx

 

In motorsports, Formula One has long been recognized as the Queen: the sport where the most advanced, sophisticated technology is rapidly and viciously developed, and brutally tested. That perception is set to shift with a racing series aiming to eliminate the most obsolete, yet still most important, component of an F1 car – the driver. Formula E, the all-electric racing series currently in its second season, is launching “Roborace,” an international motorsports series designed specifically for autonomous vehicles. According to Alex Davies, the author of “Racing Self-Driving Cars Will Make Roads Safer For Everyone,” this unique championship will provide a competitive platform for the autonomous driving solutions now being developed by many large automotive and technology players as well as top tech universities. The Roborace series, initiated in partnership with automaker Kinetik, promises to be more than a magnificent demonstration of what the technology can do when humans get out of the way: developing autonomous racing vehicles (still “single-seaters”?) that race one another around difficult, sophisticated circuits at nearly 200 mph could provide essential information about how such technology could be deployed in our everyday lives. Roborace will form part of the support package of the FIA Formula E Championship, with the first race expected during the 2016-2017 season; Roborace events are planned to run prior to each Formula E race, using the same circuits in major cities across the world.
Ten teams – including a “crowd-sourced community team” open to passionate software and technology specialists throughout the world – each with two driverless cars, will face one-hour races over the full championship season. Although all the teams will have the same cars, they will compete using real-time computing algorithms and AI technologies. The cars will be electric, yet the event organizers say they will be nearly as fast as the single-seaters competing in Formula One. And although Denis Sverdlov, CEO of Kinetik, promises “really crazy speeds” up to 186 mph, specific limits will almost certainly be imposed for racing. The cars may look completely different from traditional race cars, as there is no need for a human inside. The mission of Roborace is to demonstrate that the future of automotive and information technology is already here and can work even in the most extreme conditions. As Sverdlov puts it: “We passionately believe that, in the future, all of the world’s vehicles will be assisted by AI and powered by electricity, thus improving the environment and road safety. It’s a global platform to show that robotic technologies and AI can co-exist with us in real life. Thus, anyone who is at the edge of this transformation now has a platform to show the advantages of their driverless solutions and this shall push the development of the technology.” Furthermore, “Roborace is an open challenge to the most innovative scientific and technology-focused companies in the world. It is very exciting to create a platform for them to showcase what they are capable of and I believe there is great potential for us to unearth the next big idea through the unique crowd-sourced contest,” emphasizes Alejandro Agag, CEO of Formula E.

What is even more exciting than the concept of robots racing is how teaching those vehicles to race may develop the systems destined for consumer vehicles. “There are certain problems you have to solve at these high speeds that could improve performance at low speeds,” says John Dolan, who studies autonomous technology at Carnegie Mellon’s Robotics Institute. One crucial problem is reducing latency – the time it takes the computer to process the data coming from a sensor and transmit instructions to various systems. “At 180 mph, you’re gonna have to do that faster,” Dolan stresses. Reducing that time, which is mostly a software issue, in racing yields a more efficient and effective system in the automobiles we will eventually use. Conversely, creating an “everyday” car that can handle racing dynamics may also significantly shape the behavior of autonomous cars. Two years ago, Audi’s self-driving RS7 lapped Germany’s Hockenheimring F1 track, hitting all 17 turns with precision and reaching a maximum speed of 149 mph. Audi also sent an autonomous TTS racing up the 156-turn Pikes Peak mountain circuit in 2010, then around California’s Thunderhill Race Track in 2012. Stanford University researchers, meanwhile, demonstrated an autonomous DeLorean they taught to drift and do donuts. Both the Audi and Stanford projects were focused on understanding how driverless cars behave at the limit of traction and grip, and on applying those lessons to technology destined for consumers.
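Dolan’s latency point can be made concrete with back-of-the-envelope arithmetic: the distance a car travels “blind” during one sense-process-act cycle grows linearly with speed, so a software pipeline that is tolerable in town becomes a problem at racing speed. The 100 ms latency below is an assumed round number, not a figure from the article.

```python
def metres_per_cycle(speed_mph: float, latency_ms: float) -> float:
    """Distance travelled while one sense -> compute -> act cycle completes."""
    metres_per_second = speed_mph * 1609.344 / 3600   # mph to m/s
    return metres_per_second * latency_ms / 1000.0

# An assumed 100 ms pipeline costs ~1.3 m of blind travel at 30 mph...
city = metres_per_cycle(30, 100)
# ...but ~8 m at the 180 mph Dolan mentions -- six times farther per cycle.
race = metres_per_cycle(180, 100)
```

Since the blind distance scales with speed, every millisecond shaved off the pipeline at 180 mph buys a proportionally larger safety margin at any speed – which is exactly why racing is a useful forcing function for consumer software.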
Alex Davies argues that by developing cars that can reach triple-digit speeds on the very difficult street circuits of the Formula E calendar, the Roborace teams will inevitably be developing systems that can be deployed in consumer cars. The Roborace vehicles, however, will confront a challenge the Audi and Stanford cars did not have to deal with: competition. The Roborace cars will be racing, and the only way to come in first – if you do not start from pole position and hold the lead – is to pass the car, the robot, ahead of you. For a human, whether on a racetrack or a two-lane country road, passing is a problematic maneuver: you must pick the perfect moment, the right direction, the proper steering angle and degree of acceleration, all while balancing the risk of crashing against the reward of moving ahead. The capability to make that kind of complex decision in near real time is key to safely handling all sorts of everyday driving situations.

Even though some of the most crucial details of the Roborace project have already been presented, there is still no information about the potential participants. Although it is impossible to confirm that the companies most deeply involved in automotive autonomy (Ford, Google) will also create self-driving racing cars – developing “racing” algorithms and AI “racing” technologies – I believe this unique racing series will have a significant impact on the autonomous driving solutions now being developed by many large automotive and technology players, and thus on the systems destined for consumer vehicles. I am convinced that developing autonomous racing vehicles that can race one another around difficult Formula E street circuits, at nearly 200 mph, will provide essential information about how such technology could be implemented in our daily lives. The crucial issue here is reducing latency – the time it takes the computer to process the data coming from a sensor and transmit instructions to various systems; reducing that time in racing, which is mostly a software issue, yields a more efficient and effective system in the automobiles we will eventually use. The Roborace vehicles will also have to confront the challenge of overtaking other vehicles, and in my opinion the capability to perform that kind of difficult maneuver in near real time is key to safely handling all sorts of everyday driving situations. It is also important to remember the idea of a crowd-sourced community racing team – a great idea, since there are many independent talents in the world who might contribute to this initiative.
All things considered, the Roborace teams will inevitably be developing systems that can be deployed to consumer cars.


Picture: http://www.wired.com/wp-content/uploads/2015/12/JAGUAR_FORMULA_E_02-932x524.jpg (article: http://www.wired.com/2015/12/jaguars-joining-formula-e-electric-racing-but-its-not-just-about-the-races/#slide-3)

MZ


AUTONOMOUS VEHICLES


AUTONOMOUS VEHICLES

(autonomous vehicles – part 1)

#AUTONOMOUS CARS, #SELF-DRIVING CARS, #INNOVATIONS, #FORD, #GOOGLE

Taking the next step in its Blueprint for Mobility, Ford today – in conjunction with the University of Michigan and State Farm® – revealed a Ford Fusion Hybrid automated research vehicle that will be used to make progress on future automated driving and other advanced technologies.

Picture: http://www.wired.com/wp-content/uploads/2015/11/IMG_6155-932x524.jpg (article: http://www.wired.com/2015/11/ford-self-driving-car-plan-google/#slide-5)


Picture: http://www.wired.com/wp-content/uploads/2015/11/IMG_6689-932x524.jpg (article: http://www.wired.com/2015/11/ford-self-driving-car-plan-google/#slide-6)

 

Articles: (from WIRED – http://www.wired.com/category/transportation/)

  • “Ford’s Skipping the Trickiest Thing About Self-Driving Cars”

Author: Alex Davies; Date of Publication: November 10, 2015

URL:http://www.wired.com/2015/11/ford-self-driving-car-plan-google/#slide-1

  • “Ford’s Testing Self-Driving Cars In a Tiny Fake Town”

Author: Alex Davies; Date of Publication: November 13, 2015

URL:http://www.wired.com/2015/11/fords-testing-self-driving-cars-mcity/

  • “A Google-Ford Self-Driving Car Project Makes Perfect Sense”

Author: Alex Davies; Date of Publication: December 22, 2015

URL:http://www.wired.com/2015/12/a-google-ford-self-driving-car-project-makes-perfect-sense/

  • “The Clever Way Ford’s Self-Driving Cars Navigate in Snow”

Author: Alex Davies; Date of Publication: January 11, 2016

URL:http://www.wired.com/2016/01/the-clever-way-fords-self-driving-cars-navigate-in-snow/

  • “Google’s Self-Driving Cars Aren’t as Good as Humans—Yet”

Author: Alex Davies; Date of Publication: January 12, 2016

URL:http://www.wired.com/2016/01/google-autonomous-vehicles-human-intervention/

 

Focusing on the visions and strategies of companies in the automotive sector, more and more automobile manufacturers are developing technologies for driverless, self-driving cars. After analyzing the article “Ford’s Skipping the Trickiest Thing About Self-Driving Cars,” two paths toward automotive autonomy can be distinguished. The conventional, “traditional” automobile manufacturers favor a step-by-step approach, adding features one by one so humans cede control over time; this gives them the opportunity to refine the technology and accustom consumers to the coming change, while keeping the possibility of selling conventional cars in the meantime. Google, however, perceives that as complete nonsense and has decided to concentrate exclusively on fully autonomous vehicles that are not even equipped with a steering wheel. Alex Davies, the author of the articles on self-driving cars presented in this post, is of the opinion that Google – as well as Ford – sees no reason for the middle ground of semi-autonomy.

In automotive engineering, automation is categorized into six levels, from Level 0 to Level 5. The lowest level has no autonomous technology at all; each additional level adds progressively more sophisticated technology, up to Level 5, in which computers handle everything and the driver – now a passenger – is strictly along for the ride.
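The six-level scale can be written out explicitly. The names below follow the SAE J3016 terminology; the one-line summaries are paraphrases, not the standard’s exact wording.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """SAE J3016 driving-automation levels (summaries paraphrased)."""
    NO_AUTOMATION = 0           # the human does all driving, all the time
    DRIVER_ASSISTANCE = 1       # one assist at a time, e.g. adaptive cruise
    PARTIAL_AUTOMATION = 2      # steering and speed assists; driver in charge
    CONDITIONAL_AUTOMATION = 3  # car drives, human must take over on request
    HIGH_AUTOMATION = 4         # car drives itself within a defined domain
    FULL_AUTOMATION = 5         # computers handle everything, everywhere

# The framing of the articles: most automakers ship Level 2 today, and the
# troublesome human hand-over first appears at Level 3.
hand_over_needed = AutomationLevel.CONDITIONAL_AUTOMATION
```

Because `IntEnum` members compare as integers, the ordering of the levels – and thus statements like “skip Level 3” – can be expressed directly in code.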

The Ford Motor Company has not said much about its plans for the autonomous age, but it is road-testing a fleet of self-driving Ford Fusion Hybrids in Dearborn, Michigan, and expects to expand beyond its hometown. The company’s special Fusions are loaded with cameras, radar, LIDAR, and real-time 3D mapping to see and navigate the world around them, which in this case includes concrete, asphalt, fake brick, and dirt. Ford is the first automaker to test a fully autonomous car at Mcity, the little fake town built just for self-driving vehicles. Mcity, officially the University of Michigan’s Mobility Transformation Center, is a 32-acre artificial metropolis intended for testing automated and connected vehicle technologies. The company aims to offer a fully autonomous car in five years. According to Alex Davies, Ford decided to concentrate on fully independent vehicles because it wants to avoid the problems of semi-autonomous technology. Ford, like the vast majority of automakers, operates at Level 2: its cars can be equipped with plenty of active safety systems, such as blind-spot monitoring, parking assist, pedestrian detection, and adaptive cruise control, but the driver is always in charge. With Level 3 capability, the car can steer, maintain proper speed, and make decisions like when to change lanes, but always with the expectation that the driver will take over if necessary. Ford aims to go directly to Level 4 – full autonomy in the older NHTSA scale the article uses – in which the car is capable of doing everything and human engagement is strictly optional.
Ford wants to skip Level 3 because it raises one of the greatest challenges of this technology: how to safely shift control from the computer to the driver, particularly in an emergency. Davies describes that as a balancing act, one that requires giving drivers the advantage of autonomy – not having to pay attention – while ensuring they are ready to take the wheel if the car encounters something it cannot handle.

Also relevant is the data presented by another automaker, the German manufacturer Audi, whose tests show it takes an average of 3 to 7 seconds – and as long as 10 – for a driver to snap to attention and take control, even with flashing lights and verbal warnings. A lot can happen in that time (a car traveling 60 mph covers 88 feet per second), and automakers have different concepts for solving this issue. Audi has decided to implement an elegant, logical human-machine interface. Volvo, the Swedish premium automaker, is creating its own HMI and says it will accept full responsibility for its cars while they operate in autonomous mode.
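Audi’s reaction-time figures translate directly into distance, which is what makes the hand-over problem serious. A quick check of the arithmetic, using the article’s own 88-feet-per-second figure for 60 mph:

```python
FEET_PER_SECOND_AT_60_MPH = 60 * 5280 / 3600   # = 88.0, as the article states

def takeover_distance_ft(reaction_s: float) -> float:
    """Feet covered at 60 mph while the driver snaps back to attention."""
    return FEET_PER_SECOND_AT_60_MPH * reaction_s

best_case = takeover_distance_ft(3)    # Audi's fastest observed hand-over: 264 ft
worst_case = takeover_distance_ft(10)  # the slowest: 880 ft, a sixth of a mile
```

In other words, in the worst case the car covers almost three football fields before the human is fully back in the loop – the gap Ford and Google want to avoid by eliminating the hand-over entirely.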

Both Google and Ford, however, are steering clear of this problem. “Right now, there is no good answer, which is why we are kind of avoiding that space,” stresses Dr. Ken Washington, the automaker’s VP of research and advanced engineering. “We are really focused on completing the work to fully take the driver out of the loop.” Even though Ford has not revealed much about its capacities – how many cars are in the test fleet, or how much ground they have covered – Washington believes a fully autonomous car within five years is reasonable, if work on the software controlling it progresses well. Ford would limit deployment of its autonomous vehicle to regions where it can provide the extremely detailed maps self-driving cars require; currently, the automaker is using its self-driving Fusion Hybrids to make its own maps. It remains to be seen whether that is achievable at large scale, or whether Ford will work with a company like TomTom or HERE. Washington admits that the company’s strategy is quite similar to Google’s, with two crucial differences. First, Ford already builds cars, and will continue developing and improving driver-assistance features even as it works on Level 4 autonomy. Second, Ford has no plans to sell wheeled pods in which people are simply along for the ride – “drivers” will always have the opportunity to take the wheel. “We see a future where the choice should be yours,” Washington concludes.

Although the “self-driving car age” has become inevitable, several problems remain to be solved before these vehicles can be officially deployed. One of the greatest challenges is getting the robots to handle bad weather. All the autonomous cars now in development use a variety of sensors to analyze the world around them: radar and LIDAR perform most of the work – looking for other cars, pedestrians, and other obstacles – while cameras typically read street signs and lane markers. Alex Davies, the author of “The Clever Way Ford’s Self-Driving Cars Navigate in Snow,” stresses that in bad weather those devices can hardly scan the environment – “if snow is covering a sign or lane marker, there is no way for the car to see it.” Humans typically make their best guess, based on visible markers like curbs and other cars, and Ford says it is teaching its autonomous cars to do something similar. As emphasized above, Ford, like the other players in this area, is creating high-fidelity 3D maps of the roads its autonomous cars will travel. Those maps contain specific data like the exact positions of curbs and lane lines, trees and signs, along with local speed limits and other relevant rules. The more a car learns about a region, the more it can concentrate its sensors and computing power on detecting temporary obstacles – people and other vehicles – in real time. The maps have another advantage: the car can use them to figure out, within a centimeter, where it is at any given moment.
Alex Davies gives the following example to illustrate this point: "The car can't see the lane lines, but it can see a nearby stop sign, which is on the map. Its LIDAR scanner tells it exactly how far it is from the sign. Then, it's a quick jump to knowing how far it is from the lane lines." Moreover, Jim McBride – Ford's head of autonomous research – puts it this way: "We're able to drive perfectly well in snow, we see everything above the ground plane, which we match to our map, and our map contains the information about where all the lanes are and all the rules of the road." The Ford Motor Company claims it tested this ability in real snow at Mcity. Although the idea of self-locating by deduction may not be exclusive to Ford, this automaker is the first one to publicly show it can use its maps to navigate on snow-covered roads. However, this technology has not yet solved all the problems of autonomous driving in bad weather. Falling rain and snow can interfere with LIDAR and cameras, and driving safely requires more than knowing where you are on a map – you also need to be able to see those temporary obstacles.
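The "self-locating by deduction" idea described above can be sketched as a simple landmark-based position fix: given the mapped coordinates of a few fixed landmarks (such as stop signs) and the distances a LIDAR scanner reports to each, the car's 2D position follows from linearized trilateration. This is only an illustrative sketch, not Ford's actual implementation; the landmark coordinates and ranges below are invented.

```python
import math

def locate(landmarks, dists):
    """Estimate a 2D position from three mapped landmarks and the
    measured distance to each (linearized trilateration)."""
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    d1, d2, d3 = dists
    # Subtracting the first range equation from the other two turns the
    # squared-distance constraints into a 2x2 linear system A @ [x, y] = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # solve by Cramer's rule
    return ((b1 * a22 - a12 * b2) / det,
            (a11 * b2 - a21 * b1) / det)

# Hypothetical map: three stop signs at known coordinates (metres).
signs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
# Ranges a car standing at (3, 4) would measure to each sign.
ranges = [5.0, math.sqrt(65.0), math.sqrt(45.0)]
print(locate(signs, ranges))  # → (3.0, 4.0)
```

With the position recovered this way, the distances to mapped lane lines fall out of the map itself, even when the lines are invisible under snow.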

ford3

Picture:URL:http://www.wired.com/wp-content/uploads/2016/01/Snowtonomous_4693_Story-art.jpg or URL:http://www.wired.com/2016/01/the-clever-way-fords-self-driving-cars-navigate-in-snow/

According to the article "A Google-Ford Self-Driving Car Project Makes Perfect Sense" (Alex Davies; date of publication: December 22, 2015) as well as a Yahoo! Autos report, Ford and Google plan to create a joint venture to work on self-driving cars. The setup would put Google's very sophisticated autonomous software in Ford cars, playing to each company's strength – Google's fleet of self-driving cars has logged more than 1.2 million miles in the past few years, and covers 10,000 more each week, while Ford makes and sells millions of cars each year. "If it is true, it makes perfect sense," Davies emphasizes. In his opinion it is reasonable that Google would want to cooperate with an established, experienced automaker, because the company has never needed to think about the tens of thousands of parts that must come together under incredibly strict federal guidelines, or about the processes that require huge plants and specific competencies. Ford has been doing all that for a century, so it knows a lot that Google does not. Moreover, Ford started talking publicly about its autonomous driving research two years ago, including its interest in finding new partners. Mark Fields, the CEO of the Ford Motor Company, said that the company is actively looking to work with startups and bigger companies, and that this work is a priority for him. The cooperation would also make sense because – as presented above – Google's and Ford's approaches to autonomous driving are remarkably similar. The vast majority of automakers plan to deploy self-driving technologies progressively, adding features one by one so drivers cede control over time. 
Google, however, decided to build a car with no steering wheel, no pedals, and no role for the human other than sitting still and behaving while the car does the driving. Automobile manufacturers like Mercedes, Audi, GM, and Tesla plan to offer features that let the car do the driving some of the time, using the human as backup in emergency situations. Because this level of autonomous driving involves safely transferring control between robot and human – particularly in dangerous situations – Google as well as Ford have decided to avoid that part. Moreover, Alex Davies considers it very unlikely that the Ford Motor Company would be content with providing nothing but wheels, motors, and seats while Google does all the relevant work. Bill Ford, the executive chairman and former CEO of the Ford Motor Company, stressed that the thing he does not want to witness is Ford reduced to the role of a hardware subcontractor for companies doing the more creative, innovative work. Furthermore, Dr. Ken Washington, Ford's VP of research and advanced engineering, admitted that he wants the automaker to build its own technology: "We think that's a job for Ford to do."

However, what is also necessary to present – while focusing on the cooperation between Ford and Google – is the information that "Google's Self-Driving Cars Aren't as Good as Humans (Yet)". Google has recently announced that its engineers assumed control of an autonomous vehicle 341 times between September 2014 and November 2015. That may sound like a lot; however, Google's autonomous fleet covered 423,000 miles in that time. Google's cars have never been at fault in a crash, and Google's data shows a meaningful drop in "driver disengagements" over the past year. Moreover, Google's rate of disengagements is far lower than those declared by other companies testing autonomous technology in California, including Nissan, VW, and Mercedes-Benz. Of the 341 instances where Google engineers took the wheel, 272 derived from the "overall stability of the autonomous driving system" – things like communication and system failures. Chris Urmson, the project's technical lead, does not find this very troubling, because – as he states – "hardware is not where Google is focusing its energy right now". The Google team is more concerned with how the car makes decisions, and will make the software and hardware more robust before entering the market. The remaining 69 takeovers concern more important issues – they are "related to safe operation of the vehicle," meaning those times when the autonomous car might have made a bad decision. 
It is also important to stress that, thanks to Google's simulator program, these takeovers do not remain guesswork: if the engineer in the car is not fully convinced that the AI will perform the appropriate action, she takes control of the vehicle; later, back at headquarters in Mountain View, she transmits all of the car's data into a computer, and the team sees what the car would have done had she not taken the wheel. According to data Google has recently shared with the California Department of Motor Vehicles (DMV), 13 of those 69 incidents would have led to crashes. Google's cars have driven 1.3 million miles since 2009. They can identify hand signals from traffic officers and "think" at speeds no human can match. Nevertheless, the Google cars have been involved in 17 crashes, though never at fault. Google had previously predicted the vehicles would be road-ready by 2020. At this point, the team usually is not able to solve problems with a rapid adjustment to the code – the current challenges are far more sophisticated and complicated. Chris Urmson presents the following example: "In a recent case the car was on a one-lane road about to make a left, when it decided to turn tight instead – just as another car was using the bike lane to pass it on the right. Our car was anticipating that since the other car was in the bike lane, he was going to make a right turn. The Google car was ahead and had the right of way, so it was about to make the turn. 
Had the human not taken over, there would have been contact." Avoiding a repeat of such a fringe case is not easy, but Google says its simulator program "executes dozens of variations on situations the team has encountered in the real world," which lets the team test how the car would have reacted under slightly different circumstances. Nevertheless, Google is getting better: the disengagement numbers have dropped over the past year. Eight of the 13 crash-likely incidents took place in the last three months of 2014, over the course of 53,000 miles. The other five occurred in the following 11 months and 370,000 miles – assuming each of those would have ended in a crash, the more recent period works out to one accident every 74,000 miles. "Good, but not as good as humans," Urmson concludes. According to new data from the Virginia Tech Transportation Institute, Americans log one crash per 238,000 miles. Before bringing its technology to market, Google must make its cars safer than human drivers (who cause more than 90 percent of the crashes that kill more than 30,000 people in the US every year). "You need to be very thoughtful in doing this, but you don't want the perfect to be the enemy of the good. We need to make sure we can get that out in the world in a timely fashion," Urmson stresses. Google's disengagement numbers must keep dropping. The downward trend will continue, Urmson says, but as the team begins testing in tougher conditions, like bad weather and busier urban areas, sporadic upticks are to be expected. "As we push the car into more complicated situations, we would expect, naturally, to have it fail," Urmson says. "But overall, the number should go down."
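The crash-rate comparison above is simple arithmetic, and a few lines make it easy to reproduce. The figures are the ones quoted in the article; the helper name is mine.

```python
def miles_per_incident(miles, incidents):
    """Average miles driven between (potential) crashes."""
    return miles / incidents

# Google's 13 simulator-confirmed "crash-likely" takeovers, split by period.
late_2014 = miles_per_incident(53_000, 8)    # last three months of 2014
later_11mo = miles_per_incident(370_000, 5)  # following 11 months
human = 238_000                              # Virginia Tech: human miles per crash

print(round(late_2014))    # → 6625
print(round(later_11mo))   # → 74000
print(later_11mo < human)  # → True: still short of the human benchmark
```

The trend Urmson describes is visible in these two numbers: roughly an order-of-magnitude improvement between the periods, yet still about three times worse than the human baseline.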

I would like to stress that broadly defined automotive autonomy is set to be one of the most interesting topics of the near future. In my opinion, both of the paths towards achieving Level 4 and Level 5 automation are filled with sophisticated solutions and processes. I have to admit that the second path – the one selected by the two companies that have decided to avoid the problem of semi-autonomous technology, Google and Ford – is even more fascinating. Even if the cooperation between those two companies is never confirmed, both Google and Ford will be among the most powerful manufacturers competing in the race for full automation, and within five years' time we will have the opportunity to find out what it is like to use our vehicles without the necessity of driving. Furthermore, it is also crucial to remember the two main elements enabling self-driving cars to ride: the high-fidelity 3D maps that, together with radar and LIDAR devices, will be deployed in the autonomous vehicles. I believe that systems composed of a LIDAR scanner and 3D maps will provide us with the most accurate artificial drivers in terms of broadly defined safety. All things considered, even though the age of autonomous vehicles is on its way, there are still several problems to solve before we will be acting only – and voluntarily – as passengers (behavior of the system at high speeds, the issue of trust, bad road conditions, and emergency situations).

google1

Picture:URL:http://www.wired.com/wp-content/uploads/2015/09/Screen-Shot-2014-12-22-at-2.10.41-PM.jpg or URL:http://www.wired.com/2016/01/google-autonomous-vehicles-human-intervention/

MZ

 


ARTIFICIAL EMOTIONAL INTELLIGENCE

Reading Time: 4 minutes

blogik

URL:http://www.wired.co.uk/magazine/archive/2015/11/features/wired-world-2016-alain-de-botton-artificial-emotional-intelligence/viewgallery/619427

Credit: Maya Stepien

 

Articles:

  • Six Areas that Artificial Emotional Intelligence will Revolutionize

Article written by Alain de Botton

Wired Magazine (UK Edition – November 2015; published 21st October, 2015)

URL:http://www.wired.co.uk/magazine/archive/2015/11/features/wired-world-2016-alain-de-botton-artificial-emotional-intelligence

  • Has Artificial Intelligence Outsmarted our Emotions?

Article written by Meng Li

Wired Magazine

URL:http://www.wired.com/insights/2014/11/artificial-intelligence-emotions/

 

In today's world, due to the rapid development of digital technologies, many of us have already had a chance to interact with 'everyday use' appliances (gadgets, home devices) or even bigger, more technologically complicated machines – robots – which not only perform their 'core' tasks, but may also add some unique value to these broadly defined relationships. The devices are becoming more and more artificially intelligent.

Meng Li, the author of the article "Has Artificial Intelligence Outsmarted Our Emotions?", stresses that the history of our relationship with technology is simple – we purchase machines and devices that we expect to fulfill certain needs, and we interact with technology in a predictable interchange. Meng Li is also of the opinion that technology companies must recognize the connections people are forming with these 'intelligent' devices, and create procedures to develop products accordingly – "because the devices that will prevail are those that not only please us, but those that we also hope to please". While thinking about the machines of the future – and, more specifically, about Artificial Intelligence (AI) – the vast majority of people put the emphasis on the performance of rational executive tasks. However, according to Alain de Botton (Wired Magazine: "Six Areas that Artificial Emotional Intelligence will Revolutionize"), another, more complex scenario should be considered here – one based on emotions and the psychological dimensions of existence. It is time to focus on Artificial Emotional Intelligence (AEI).

In everyday life people make a lot of mistakes. We have problems with making decisions, regardless of the type of obstacles we are facing. What also has to be stressed here is the issue of emotionally wise decisions – they cannot be treated as merely lucky ones. We have to remember that even though we are not infallible, our brains are unique, extraordinary. Emotionally wise decisions are the result of the impressive performance of our brains, and they are therefore also forms of intelligence that can logically be replicated and improved upon artificially, with the help of microchips and code. We may conclude that this very specific type of scarcity – the scarcity of wisdom – will soon be vanishing, and emotional intelligence will move to the center of attention. Alain de Botton emphasizes that AEI will revolutionize the following six areas of life:

  • self-knowledge
  • education
  • news media
  • art
  • shopping
  • relationships

AEI will provide us with the self-knowledge we need – it will map our brains and alert us in good time as to the reality of our psychological lives. We will know not only what job we should be doing, but also whom we should try to form a relationship with, and how. Mr. de Botton emphasizes that "AEI will give us a picture of our inner selves which will stop us making catastrophic errors on the basis of an inability to interpret our emotional functioning and psychological potential". Moreover, AEI will help us evolve towards the best versions of ourselves. With AEI we will know how to lead people to information that is genuinely crucial for them and their nations; AEI will guarantee better media and, in turn, more democratic politics. AEI will also stand for encoded consumer intelligence – we will be able to check our decisions on an AEI machine, which will inform us how we can be persuaded and motivated to purchase goods and services, taking into consideration our true needs. Finally, the relationship with emotionally intelligent machines will provide us with access to a friend's wisdom when we most need it.

In my opinion, the biggest obstacle here will be connected with the broadly defined aspect of trust. People are starting to engage in relationships with machines – highly advanced devices that currently help us in setting and achieving specified goals, but also in producing and analyzing results, all performed using human-like feedback. Furthermore, research has shown that even though people prefer robots which seem capable of conveying some degree of human emotion – facial features, interaction, human-like gestures – they are repulsed by robots that look and move almost, but not quite, like humans. What will be crucial in the AEI case is broadly defined balance – adding human characteristics through AEI design is only appealing so long as the technology maintains its honest, outwardly robotic qualities. All things considered, both software and hardware are taking on human tendencies, which has a significant impact on the transformation and development of the relationship between people and machines. Human-like gestures and the interaction as a whole help to establish a sense of trust. People tend to seek acceptance, validation, and feedback from machines; however, it is crucial to remember that this emotional interplay may only be accomplished with the very real power of artificially intelligent technology.

What if machines would take over and become versions of very devious, powerful people?

MZ
