Author Archives: Grams Bartosz

Autonomous Learning

Reading Time: 2 minutes

Google is creating AI-powered robots that navigate without human intervention, a prerequisite for being useful in the real world. A newborn fawn takes about 10 minutes to stand and seven hours to walk. Autonomous robots are already a familiar concept, but autonomously learning robots are only at the beginning of their development: existing learning algorithms still rely on human intervention.

The work builds on a project from a year earlier, when the group worked out how to get the robot to learn in the real world. But a human still had to look after the robot and intervene manually hundreds of times, says Jie Tan, a paper coauthor who leads the robotics locomotion team at Google Brain. “Initially I didn’t think about that,” he says.

So they set out to solve this new problem. First, they restricted the area the robot could explore and had it train on several maneuvers at once. If the robot reaches the edge of the bounded area while learning to walk forward, it changes direction and begins learning to walk backward.

Second, the researchers also constrained the robot’s trial movements, making it cautious enough to minimize damage from repeated falls. When the robot inevitably did fall over, they added another hard-coded algorithm to help it stand back up.
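A minimal sketch of how such a training loop might be organized is below. It only illustrates the ideas described above; the workspace bounds, fall probability, and function names are hypothetical, not Google’s actual code.

```python
import random

# Hypothetical workspace bounds; the real system trains locomotion policies
# with reinforcement learning on a quadruped robot.
WORKSPACE = (-2.0, 2.0)   # metres along the training corridor

def at_boundary(x):
    """True when the robot has reached the edge of the allowed area."""
    return x <= WORKSPACE[0] + 0.2 or x >= WORKSPACE[1] - 0.2

def stand_up():
    """Scripted recovery routine, so a human never has to reset the robot."""
    pass

def train_episode(task, x):
    """One cautious trial: small exploratory movements to limit falls."""
    step = random.uniform(0.0, 0.1)            # constrained trial movement
    x += step if task == "forward" else -step
    if random.random() < 0.05:                 # falls still happen sometimes
        stand_up()                             # hand-coded get-up behaviour
    return x

x, task = 0.0, "forward"
for episode in range(1000):
    x = train_episode(task, x)
    if at_boundary(x):
        # Switch to the opposite maneuver instead of leaving the workspace.
        task = "backward" if task == "forward" else "forward"
```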

Thanks to these modifications, the robot learned to walk on its own across several different surfaces, including flat ground, a memory foam mattress, and a doormat with crevices. The work shows the potential of future applications that may require robots to move through difficult, unknown terrain without a human present.

Limit(AI)tions

Reading Time: 2 minutes

Nowadays computers have learned how to translate languages, drive cars and diagnose diseases. They outsmart professionals at strategy games, recognize complicated patterns and forecast the weather.

Yet for all this power, artificial intelligence has its limits.

When they run into situations they have not encountered before, machine-learning systems can get confused and fail to carry out their tasks properly.

There is one reason for this: AI systems do not understand causation. They can see that some events are associated with others, but they cannot grasp cause and effect. It is as if you knew that the presence of the sun makes a hot day more likely, but did not know that the sun is what causes the temperature to rise.
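A toy illustration of that gap, using made-up numbers: a purely statistical learner finds the sunshine-temperature association equally well in both directions, and nothing in the fitted numbers tells it which way the influence actually runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: sunshine (hours) causally drives temperature (deg C).
sunshine = rng.uniform(0, 12, size=500)
temperature = 12 + 1.5 * sunshine + rng.normal(0, 2, size=500)

# A correlation-based model sees only the association, not its direction.
corr = np.corrcoef(sunshine, temperature)[0, 1]
temp_from_sun = np.polyfit(sunshine, temperature, 1)   # slope, intercept
sun_from_temp = np.polyfit(temperature, sunshine, 1)   # works just as "well"

print(f"correlation: {corr:.2f}")
print(f"predict temperature from sunshine: slope {temp_from_sun[0]:.2f}")
print(f"predict sunshine from temperature: slope {sun_from_temp[0]:.2f}")
```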

Grasping cause and effect is a huge part of what we call common sense, and artificial intelligence does not have it. There is a growing consensus that AI progress will be grounded if computers do not get better at handling causation. If machines could learn to distinguish cause from effect, there would be no need to teach them everything from scratch every time; they could take skills from one area and apply them to another. What is more, trust in AI-powered machines would rise considerably if we knew they were dependable and would not make silly mistakes.

We cannot predict how long it will take for computers to acquire reasonable causal-reasoning abilities. The first step will be to develop machine-learning tools that combine data with available scientific knowledge. “We have a lot of knowledge that resides in the human skull which is not utilized.”

 

Sources:

blog.huree.co

towardsdatascience.com

TerraSentia & Agriculture

Reading Time: 2 minutes

Girish Chowdhary, an agricultural engineer at the University of Illinois at Urbana-Champaign, presented an AI-based project: a robot named TerraSentia that resembles an improved version of a lawn mower, with all-terrain wheels and a high-resolution camera on each side.

 

TerraSentia navigates much like a self-driving car: the robot sends out thousands of laser pulses to scan its environment.

“It’s going to measure the height of each plant,” Dr. Chowdhary said.
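As a rough sketch of how such a height measurement could work in principle, each laser return at a known angle and range can be converted into a height above the ground, and the tallest point seen in a scan gives an estimate of the plant’s height. The mounting height, angles, ranges, and function names here are hypothetical, not TerraSentia’s actual software.

```python
import math

SENSOR_HEIGHT_M = 0.3   # hypothetical mounting height of the laser scanner

def return_height(angle_deg, range_m):
    """Height above ground of a single laser return.

    angle_deg is measured from horizontal (positive = upward),
    range_m is the distance reported for that pulse.
    """
    return SENSOR_HEIGHT_M + range_m * math.sin(math.radians(angle_deg))

def plant_height(returns):
    """Estimate plant height as the tallest point seen in one scan."""
    return max(return_height(a, r) for a, r in returns)

# Example scan: (angle in degrees, range in metres) pairs for one plant.
scan = [(-10, 0.8), (0, 0.6), (20, 0.7), (45, 0.9), (60, 1.1)]
print(f"estimated plant height: {plant_height(scan):.2f} m")
```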

Of course, it would do that and more. The robot is designed to build the most detailed possible portrait of a field, from the size and health of the plants to the number and quality of ears each corn plant will produce by the end of the season, so that agronomists can breed even better crops in the future.

“The idea is that robots can automate the phenotyping process and make these measurements more reliable,” Dr. Chowdhary said. Thanks to that, farmers will be able to optimize the yield of farms much more efficiently than ever before.

Agriculture has always strived to be as automated as possible. Current farm equipment is regularly outfitted with sensors that use machine learning and robotics to identify weeds and calculate the amount of herbicide that needs to be sprayed, or to detect and pick strawberries.

Sowing a niche

The demands on agriculture are rising worldwide. According to the United Nations, the human population is expected to reach 11.2 billion by 2100. To feed that population with less land, fewer resources and a changing climate, farmers will have to develop their technological intelligence.

“There’s definitely a niche for this kind of robot,” said Neil Hausmann, who oversees research and development at Corteva. “It provides standardized, objective data that we use to make a lot of our decisions. We use it in breeding and product advancement, in deciding which product is the best, which ones to move forward and which ones will have the right characteristics for growers in different parts of the country.”

There is no need for farmers to have special expertise to operate the TerraSentia. It is almost fully autonomous. The TerraSentia has already been tested in a wide variety of fields, including corn, soybean, sorghum, cotton, wheat, tomatoes, strawberries, citrus crops, apple orchards, almond farms and vineyards.


Hide your smartphone

Reading Time: 2 minutes

Google has announced three projects meant to get you out into the real world and away from your phone. The apps come from Google’s Digital Wellbeing Experiments and aim to make your phone barely functional. Currently, the Envelope experiment works only with the Pixel 3a.


One involves sticking your phone into an envelope, sealing it, and using it only as a camera or a basic keypad to dial numbers. If you do have a Pixel 3a, download the required Play Store app for the envelope, called Envelope, then print out the PDF for the envelope, cut out the template, and follow the instructions to construct it. Then, when you’re ready for a break from your phone, open up the Envelope app, slide your Pixel 3a into the envelope, and seal the envelope shut — the PDF recommends using glue. Once your phone is sealed in the envelope, you’ll only be able to dial phone numbers on the phone, use speed dial, or have the phone tell you the time by flashing the numbers on the number pad.

The second, Screen Stopwatch, transforms your home screen into a giant timer every time you unlock your phone—it’s supposed to make you more aware of your phone usage. And the last, Activity Bubbles, represents your activity in bubble shapes. The longer the session, the bigger the bubbles; the more the sessions, the more bubbly your home screen.
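As a rough illustration of that mapping, bubble size could simply grow with session length, one bubble per unlock. This is purely hypothetical; Google has not published how Activity Bubbles scales its shapes, and the sessions and scale factor below are invented.

```python
import math

# Hypothetical unlock sessions from one day, in minutes.
sessions_min = [2, 15, 1, 40, 5]

def bubble_radius(minutes, scale=4.0):
    """One bubble per session; bubble area proportional to session length."""
    return scale * math.sqrt(minutes)

for i, minutes in enumerate(sessions_min, start=1):
    r = bubble_radius(minutes)
    print(f"session {i}: {minutes:>2} min -> bubble radius {r:.1f} px")
```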

But can it actually make us use our phones less? Sure. By sealing your phone and making it harder to use, you’re cutting out any potential side jaunts into Twitter or Instagram to stalk a frenemy. Measuring the time you spend on your phone isn’t a new concept, but it certainly helps to quantify the time you spend on a screen.

Sources:

https://www.technologyreview.com/

https://techcrunch.com/2020/01/22/googles-new-experimental-apps-focus-on-reducing-screen-time-including-one-that-uses-a-paper-envelope/

Crew Dragon’s final stretch

Reading Time: 2 minutes

NASA conducted a test of the Crew Dragon crew capsule, built by SpaceX under NASA’s Commercial Crew Program, which also includes Boeing. The agency wanted to check key emergency procedures for crew safety. It was spectacular.
The emergency in-flight abort system has just been successfully demonstrated.
On January 19th a thrice-flown Falcon 9 sent an uncrewed Crew Dragon 12 miles into the sky. About 84 seconds after launch, the rocket shut off its engines and the capsule’s own SuperDraco engines fired, separating Crew Dragon from the Falcon 9 at Mach 2.2 and carrying it a mile away in a matter of seconds.
It was a key test of safety procedures for the Crew Dragon capsule, a project meant to carry people to space on a regular basis. It will be the first repeatable capability of this kind since space shuttle missions were suspended.
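As a rough sanity check on the Mach 2.2 figure above: assuming a sea-level speed of sound of about 343 m/s (the true value at altitude is lower, so this is only an order-of-magnitude estimate), a vehicle moving at that speed covers a mile in roughly two seconds.

```python
SPEED_OF_SOUND_M_S = 343   # sea-level approximation; lower at altitude
MACH = 2.2
MILE_M = 1609

speed_m_s = MACH * SPEED_OF_SOUND_M_S        # roughly 755 m/s
seconds_per_mile = MILE_M / speed_m_s        # roughly 2.1 s
print(f"separation speed: {speed_m_s:.0f} m/s")
print(f"time to cover one mile: {seconds_per_mile:.1f} s")
```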

The test took place on the afternoon of January 19th, Polish time, and went as planned. Its purpose was to simulate irregularities during take-off. The mission’s task was to perform a controlled disconnection of the Crew Dragon capsule from the Falcon 9 rocket that carried it aloft.
The disconnection took place about a minute and a half after take-off. Fifteen seconds later, torn apart by aerodynamic forces, the Falcon 9 rocket exploded spectacularly. The mission command center had anticipated this possibility, however; it had even been said that there was only a small chance the rocket would survive the test.

The most important part of the mission went smoothly, however: the capsule separated from the rocket without problems and began a short flight on its built-in engines. Less than six minutes after take-off, four new-generation parachutes deployed from the vehicle, designed to slow the capsule’s descent and make the landing not particularly uncomfortable for a potential crew. Nine minutes after launch, Crew Dragon splashed down in the ocean about 30 km from the launch site. Rescue teams then set to work, practicing the procedures for getting the crew out of the capsule.

Sources:

https://www.technologyreview.com

https://www.space.com/spacex-crew-dragon-in-flight-abort-test-photos.html
