Category Archives: Uncategorized

MHV technology


MHV (mild-hybrid vehicle) is a relatively new technology, popularized on the market by Audi. It is often confused with the already well-known hybrid powertrain. Below I’ll explain how both powertrains work so that you can understand why a mild hybrid system is so good and efficient.
So basically, a traditional hybrid system has two kinds of engines: an internal combustion engine (petrol, diesel, LPG etc.) and an electric motor. The two share the work roughly 50/50, with the electric motor doing most of it at lower speeds and the combustion engine taking over at higher speeds. It is a good system, but it is simple and starting to show its age.
A mild hybrid, from the technical point of view, is something different. You could say that an MHV also has two engines, but this time the internal combustion engine plays the main role, while the electric motor is only a support. I know, “support” may not sound like much, but there is far more to it.

So, the electric motor in an MHV is the starter itself. A traditional starter has only one function: starting up the engine. In an MHV it also acts as a motor that supports the combustion engine (in most new Audi vehicles it adds approximately 40 hp), and as an alternator that charges the 48 V battery while coasting (when the vehicle is moving but the gas pedal is released). The battery also supplies the electric turbocharger, which increases the power of the engine. Generally speaking, the starter is a multi-purpose device that improves the performance of many components. It is a revolutionary invention in the automotive world.
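The division of labour described above can be sketched as a toy decision function. This is my own simplified illustration, not Audi’s control software: the mode names, the throttle threshold and the behaviour are invented to mirror the roles listed in the paragraph.

```python
# Toy sketch of a mild-hybrid belt starter-generator (BSG) controller.
# Hypothetical logic only; a real 48 V system is far more complex.

def bsg_mode(engine_running: bool, throttle: float, coasting: bool) -> str:
    """Pick the starter-generator's role for the current driving state.

    throttle: accelerator pedal position, 0.0 (released) to 1.0 (floored).
    """
    if not engine_running:
        return "start"        # classic starter duty: crank the engine
    if coasting and throttle == 0.0:
        return "recuperate"   # act as an alternator, charge the 48 V battery
    if throttle > 0.8:
        return "boost"        # add electric support (roughly 40 hp in new Audis)
    return "idle"             # neither boosting nor charging

print(bsg_mode(False, 0.0, False))  # start
print(bsg_mode(True, 0.0, True))    # recuperate
print(bsg_mode(True, 1.0, False))   # boost
```

The point of the sketch is simply that one device covers three jobs (starter, motor, alternator) depending on the driving state.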
Take a look at an educational video made by Audi:



Toyota’s Woven City

One of the most ambitious projects presented at CES 2020 is Toyota’s city of the future, Woven City. The municipality will be one large laboratory where scientists can test new technologies.

The company plans to build the city at the foot of Mount Fuji, 90 km from Tokyo. All vehicles in it will be autonomous and will run exclusively on hydrogen fuel cells; according to Toyota’s plans, houses will be heated the same way.

The area of Woven City will be 70.8 hectares, all the houses in it will be built of wood, and solar panels will be installed on their roofs. The engineers’ main idea is to create a city built not around transport but primarily for pedestrians. The streets of Woven City will be clearly divided into pedestrian and transport zones, and in addition to scientists and engineers, ordinary people will be able to live in it.

From my personal point of view, I think this technology will change the world and earn its place in human history. Other companies should follow Toyota’s example to bring our world closer to the future. If you have something to say, please do not hesitate to leave a comment below.



Crew Dragon’s final stretch

NASA conducted a test of the Crew Dragon crew capsule, developed by SpaceX under the Commercial Crew Program. The agency wanted to check key emergency procedures for crew safety. It was spectacular.
The emergency in-flight abort system has just been successfully demonstrated.
On January 19th, a thrice-flown Falcon 9 sent an uncrewed Crew Dragon about 12 miles into the sky. Roughly 84 seconds after launch, the rocket shut off its engines, and the capsule’s own SuperDraco engines fired, separating Crew Dragon from the Falcon 9 at Mach 2.2 and carrying it a mile away in a matter of seconds.
It was a key test of safety procedures for the Crew Dragon capsule, a project whose goal is to carry people to space on a regular basis. It is the first repeatable capability of that kind since the suspension of Space Shuttle missions.

The test took place on January 19th, in the afternoon Polish time, and went as planned. Its purpose was to simulate irregularities during take-off. The mission’s task was to perform a controlled disconnection of the Crew Dragon capsule from the Falcon 9 rocket that carried it towards space.
The disconnection took place about a minute and a half after take-off. Fifteen seconds later, due to strong vibrations, the Falcon 9 rocket exploded spectacularly. However, mission control had anticipated this possibility; it was even said that there was little chance the rocket would survive the test.

The most important part of the mission, however, was a success: the capsule disconnected from the rocket without problems and began a short flight on its built-in engines. Less than six minutes after take-off, four new-generation parachutes deployed from the vehicle, designed to slow the capsule down and make the landing not particularly uncomfortable for a potential crew. Nine minutes in, Crew Dragon splashed down in the ocean about 30 km from the launch site. Rescue teams then set to work, practicing the procedures for getting a crew out of the capsule.



An electric car from Sony may soon be on its way

Over the last several years, smartphone makers have been taking their next big step into the TV market: Xiaomi does this, and so do Apple, HTC and OnePlus. Sony, however, appears to have taken a different turn, shocking everybody by introducing its first electric car. The announcement was made at CES 2020 in Las Vegas.

The Sony Vision-S (not the car’s official name but that of its construction platform) is an electric sedan with 33 sensors inside and outside. Multiple wide screens, adapters, and a 360-degree audio system are on board.

The car was built by Sony’s AI and robotics department, according to a report from CNET. “Safety Cocoon” is the name of the car’s safety concept, and it has been fully tested on the track. Sony said the road check was carried out to make sure the car complies with all safety regulations before exhibiting it.

The Sony prototype has also been equipped with Time-of-Flight sensors that can detect and recognize people and objects inside and outside of the car. The car offers so-called Level 2 autonomy: the software can steer, accelerate and brake on its own, but drivers must still supervise it in case the system fails.

According to several sources, Sony provided very little additional information on the vehicle. However, some specs did emerge: it is a 4-seater with two 200 kW motors. The car can accelerate from 0 to 60 mph in less than 5 seconds, according to an Engadget report, and has a top speed of 149 mph. Remember that there is no word on whether Sony formally plans to ship or sell this vehicle. It is still a prototype, a concept car that showcases Sony’s various technologies. That said, who knows, in the not-so-distant future a Sony car might be on the roads.





Technological threats


We are aware of new technologies. We know the purpose of creating AI. We all know the fields where it is used or where it can be improved even further. The main question is: how does it work?

The unspoken fact is that developers do not fully understand what they are developing. We do not follow how it learns; at this point we only care about the results. AI is spreading fast, but should we accept this and stick with something we cannot really understand yet? Yes, I have heard about computers reaching 80% accuracy and higher, with less effort and much less time wasted. That is not the point, though. We do not live in numbers. The amount of data stored and analysed is immeasurable. At the end of the day, it does not matter to us how many times machine learning gets something right. More important is what kind of mistake we are considering: what will its consequences be, and how many people will be involved?

“If we will be able to program things like us, what if we were programmed or we can be programmable?” ~Kristian Hammond

Actually, we should be terrified. The only way to defend AI development as it is right now is to accept all the science-fiction movies and the theories of us being destroyed by some “Greater Mind”. We are aware of our own cognition: it is easy for us to research a new topic while understanding the path we followed. But how do we describe AI cognition? How does it acquire new skills? We rely on such things as:

  • Emotions
  • Creativity
  • Humour
  • Consciousness
  • Intuition

Machine learning cannot take these elements into account; there is no way to implement them. We can share our experience by uploading and feeding programs with more data, but our own intellectual laziness blinds us to the possible tragedy. What makes us believe that the algorithms we created will work properly? Look at self-driving cars. A car is on the road, at legal speed, everything normal. Then a kid on a bike suddenly appears from the left side of the car’s lane, just in front of it. From the right, an older woman is walking straight under the wheels. Too late for the brakes. What is the decision to make? The options: turn left and hit the kid, turn right and hit the woman, or go straight and possibly make them both suffer. It is a moral dilemma. Personally, I do not know and would never want to make such a decision; it would be an impulse, I suppose. That’s all. Life is life, and we have no right to choose between one and the other. Now follow the self-driving car’s decision-making process. Which option is better? Who has a higher chance of surviving? Who is responsible? I would love to see the explanation behind that kind of decision-making.

Autonomous weapons, social manipulation, invasion of privacy, discrimination: we cannot let AI dominate these fields. One word connects all globally successful people: INTUITION. Scientists, businessmen, trend influencers, all of them followed “the gut”, which allowed them to improve. How do we define intuition? Where does it come from? How do we implement it in AI? We are not conscious of its source, so we cannot program it whatsoever. The change in the FBI Cyber Division’s statement sums this up perfectly: from “There are two types of organisations: those who suffered a cyber-attack, and those who didn’t yet” to “There are two types of organisations: those who suffered a cyber-attack, and those who don’t know it yet”.

We all know AI will kill many workplaces before long. In my opinion, it will create almost the same number. If awareness of the threats rises, we will need a huge number of people to analyze its steps. People partnering with AI will be necessary for the successful development of this technology. We should shape the questions we ask so that we are not manipulated. We need to lead, not follow.



Extended reality in the year 2020

We may all, at some point in our lives, have been involved with the extended reality industry. Even a person who at first glance has nothing to do with the AI development industry may use the latest AR technologies on a daily basis. The thing is, Extended Reality (XR from now on) has taken over our lives in a remarkably short time, yet people seem not to notice it.

XR is, basically, the umbrella term for all the latest technologies made to “manipulate” reality and create more immersive computer experiences.

Probably the most famous and popular type of XR nowadays is Virtual Reality (VR from now on). VR has found its way into the gaming industry; indeed, the first things we think of when we hear the term “VR” are headsets like the Vive, the Oculus Rift etc. VR fully immerses its user in an “alternative reality” through a headset, manipulating the visual and auditory senses.

Augmented Reality (AR) is the type of XR that projects virtual objects onto real ones through, for example, a smartphone camera; that is exactly how Instagram masks work.

Mixed Reality (MR) is the most recent and most interesting type of XR: it allows real and digital objects to interact. For example, you can “put” a virtual object on your table and interact with it as you would with a real-life one. It should be mentioned that even though MR has great potential, at this point it requires a lot of processing power and is definitely not ready for everyday use; still, companies keep exploring it with the future in mind (see: Microsoft’s HoloLens).


Innovative technique that converts plastic junk into fuel

It’s really hard to get through a day without using plastic products. They’re inexpensive and durable, and as a result, human plastic production is high. We know about the tons of plastic waste accumulated in the oceans; by 2050, the ocean will have more plastic than fish, and that’s frightening. But how can we, humans, solve the problem of the growing amount of plastic junk on our planet?

One of many ways to fight plastic pollution is to convert plastic trash into something valuable. A team at Purdue University came up with an idea to convert commonly used plastic into oil. As ACS Sustainable Chemistry & Engineering reports, this process is far more energy-efficient than recycling or burning plastic waste.

Nearly 25% of plastic junk is polypropylene, used to make everyday products. Chemical engineer Linda Wang and her team focused on re-using this type of plastic (resin identification code 5). Plastics are hydrocarbons made from petroleum, and they can be converted back into liquid fuel. The Purdue team uses a method called hydrothermal processing.

Scientists had used this technique on other types of plastic before, but the yield of those processes was very low. Wang’s team places the polypropylene in a reactor filled with water and heats it to temperatures of 380–500°C for up to about five hours at a high pressure of 23 megapascals. At this heat and pressure, water breaks the plastic down and converts it into oil. The Purdue team managed to transform approximately 90% of the plastic into oil.
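A quick back-of-the-envelope mass balance follows from the ~90% conversion figure above. Only that yield number comes from the article; the function itself is just illustrative arithmetic.

```python
# Simple mass balance for hydrothermal conversion of polypropylene (PP)
# to oil, using the ~90% yield reported by the Purdue team.

YIELD = 0.90  # fraction of plastic mass converted to oil

def oil_from_pp(plastic_kg: float) -> float:
    """Oil produced (kg) from a given mass of polypropylene waste."""
    return plastic_kg * YIELD

# 1 tonne of PP waste yields roughly 900 kg of oil,
# with about 100 kg of material left unconverted.
print(oil_from_pp(1000.0))  # 900.0
```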

The resulting oil can be used to make building blocks for commonly used fuels and chemicals. The team’s analysis shows the hydrothermal process uses less energy and produces fewer emissions than incinerating polypropylene.

Linda Wang and her colleagues didn’t stop there. Now they are working on optimizing the production of gasoline and diesel fuels.




Robotic Process Automation

The idea that robots can replace humans is probably one of the most popular tropes of science-fiction movies. Nonetheless, this far-fetched scenario could actually become real within a couple of years, thanks to the current development of Robotic Process Automation (RPA). RPA is software that mimics human behavior with the use of artificial intelligence and machine learning. It is based on software “robots” with a wide range of abilities; for example, they can enter data, complete complex tasks, and log in and out of many systems. The remarkable trait of RPA is that it can tackle complex, repetitive business processes without human intervention.

As future entrepreneurs and employees, we should bear this acronym in mind. RPA comes with many advantages, such as cuts in costs, time, effort, and even the workforce. As well-configured robots make virtually no errors, it is a credible service that needs only light oversight. Because of that property, RPA has made its way into Accounts Payable (AP) automation. Robots can handle invoice management on their own, including actions such as reading invoices, extracting key details, entering them into SAP, and sending emails regarding the document. Similarly, we can look into legal automation. As we all know, the traditional way of handling case documents in courts can be arduous. Piles of paper and data are a literal nightmare that keeps lawyers away from their real work. This inefficiency also drives up the costs of legal processes. All that hopelessness can go away with the help of artificial intelligence (AI) and business process automation (BPA). Legal departments and in-house counsels can count on robots for help with e-discovery, data analysis, and contract management.
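As a rough illustration of the “read an invoice, extract key details” step, here is a minimal sketch using regular expressions. The invoice layout, field names and sample text are invented for the example; real RPA products combine OCR, ML models and connectors to systems like SAP, none of which are shown here.

```python
# Minimal sketch of the invoice-extraction step of an RPA pipeline.
# The invoice format below is hypothetical, made up for illustration.
import re

SAMPLE_INVOICE = """
Invoice No: INV-2020-0042
Vendor: Acme Supplies Ltd.
Total Due: 1,249.99 EUR
"""

def extract_invoice_fields(text: str) -> dict:
    """Pull the invoice number, vendor and total out of plain invoice text."""
    fields = {}
    number = re.search(r"Invoice No:\s*(\S+)", text)
    vendor = re.search(r"Vendor:\s*(.+)", text)
    total = re.search(r"Total Due:\s*([\d,]+\.\d{2})\s*(\w+)", text)
    if number:
        fields["number"] = number.group(1)
    if vendor:
        fields["vendor"] = vendor.group(1).strip()
    if total:
        fields["total"] = float(total.group(1).replace(",", ""))
        fields["currency"] = total.group(2)
    return fields

print(extract_invoice_fields(SAMPLE_INVOICE))
```

The extracted dictionary is the kind of structured record a robot would then paste into an accounting system or attach to an outgoing email.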


Looking at the Robotic Process Automation market, there are three foremost leaders: UiPath, Blue Prism Group, and Automation Anywhere. All of them are start-ups launched in 2005 or earlier, but, in spite of venture capital and funding, they didn’t become successful until they entered the RPA market. This market is continuously growing and becoming more creative. Many entrepreneurs are looking into innovations and integrating RPA with cognitive technologies: speech recognition, language processing, or machine learning. Many businesses are putting this sort of automation, known as intelligent automation, into their strategic plans for this year.




To sum up, RPA is undoubtedly revolutionizing business. It disrupts mundane work with the futuristic idea of robots. The benefits of automation can be observed not only in departments across huge companies but also in small households. With Robotic Process Automation, we can finally excel at our work without wasting time on repetitive copy-and-paste formalities. As the history of this technology shows, automation has grown within a few years from its infancy into a service in high demand. However, many fear that robots will take our jobs and increase unemployment rates. For example, government workers often do tasks that can be easily automated, such as entering data or handling bureaucracy. Similarly, low-qualified employees may ponder their place in digitalized work. Instead, we should think about how automation can take over the repetitive parts of a job, like that error in the data which forces us to re-enter everything into the system. This will let us focus on real work and customers, and finally enjoy life.



Contact lenses straight from sci-fi movies

Remember people in movies using unbelievable, out-of-this-world technology? It turns out it’s now about to become our reality.

As our world evolves and technology develops, everything starts to look more and more like a science fiction movie. The year 2020 has just started and already seems to be all about innovation. After the smartphone, the smartwatch and the smart TV, now it’s time for the smart contact lens.

Last week, during the Consumer Electronics Show in Las Vegas, Mojo Vision, a startup from California, announced its newest innovation: the Mojo Lens. It is a contact lens that uses augmented reality to help people with sight problems. It could adjust light, contrast and zoom depending on the eye defect, and also help with seeing in the dark.

However, it’s not aimed only at medical use. The company also says it could be used by anyone at work or in everyday life. It would show important information, like temperature or air quality, that could be useful in, for example, firefighters’ work.

It is built around a 14,000 pixels-per-inch display fitted into a contact lens, along with a radio and eye-movement sensors. The Mojo Lens should last the whole day and recharge in its case through induction. The display is to be located in the middle, right over the pupil, and will aim exactly at the fovea (the part of the retina that provides the sharpest vision).
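A quick sanity check on that 14,000 ppi figure: the implied pixel pitch comes out to under two micrometres. The calculation below is mine, not Mojo Vision’s; it just converts the quoted density into a centre-to-centre pixel spacing.

```python
# Pixel pitch implied by a 14,000 pixels-per-inch display.
MM_PER_INCH = 25.4

def pixel_pitch_um(ppi: float) -> float:
    """Centre-to-centre pixel spacing in micrometres for a given pixel density."""
    return MM_PER_INCH / ppi * 1000  # mm per pixel -> micrometres

print(round(pixel_pitch_um(14_000), 2))  # 1.81
```

For comparison, a typical flagship phone screen sits around 400–500 ppi, so a density this high is what makes an in-eye display plausible at all.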

Nevertheless, this project is still far from completion. When the prototype was shown, it was not possible to experience all the features Mojo Vision is promising in the final lens; it could only be held close to the eye, not placed directly on it. For now, only some green words and numbers appeared in front of you.

However, Steve Sinclair, the Vice President of product and marketing, assured everyone in a talk with Fast Company that the device is within reach. “We’re really confident about this working,” he said. “We’re seeing all the pieces coming together into a product that does everything we want it to do.” All in all, even though it is still a work in progress, it is definitely a huge step forward for future research and innovation.





New technologies in cosmetics shopping

Nowadays we shop in a much more technologically advanced way. It’s not just that we can choose and order any product online; we can also pick cosmetics suited to our type of beauty.

Women can hardly believe how it is possible to match the colour of a foundation or highlighter to their faces. Not only do we need to know which type of beauty we have, but skin can be lighter or darker, dry, oily or even combination. What’s more, even if we know our type exactly, we have no guarantee that under intense artificial light the chosen product will look the way we expected. Additionally, many cosmetics, after contact with skin, can look completely different from the raw form of the product.

So how exactly can we choose cosmetics using new technologies?

One of the solutions was introduced by the Lancôme brand in 2017: people came to a shop where they could buy a foundation chosen from 72 colours after having their faces scanned. The algorithm determined which mix of colours was the most suitable for a particular type of beauty.

But this idea couldn’t really work in “drugstore” conditions, because far more people shop in places like that. Technology companies extend a helping hand by offering dedicated mobile applications. The most popular application of this type is ModiFace, which the L’Oréal brand is trying to acquire. ModiFace uses augmented reality to turn the phone’s screen into a mirror. In short, the potential client can see their face with the cosmetic applied and judge whether they like it or not.

There are now other applications on the market that make cosmetics shopping easier. For example, CosmeticScan is an application which uses the camera to scan the barcode of a particular product and sends us information about its composition, use and risks.

Another interesting example is the CosmEthics app, which directs us to websites where we can find professional reviews as well as scientific articles.

This technological revolution in the cosmetics trade is not only about mobile applications. The next innovative product introduced by L’Oréal was a UV patch which we stick to the body and which, together with a mobile app, warns us about risks connected with sun exposure. It reports two indicators: the current level of solar radiation and the daily dose of sun.

A really surprising idea was the smart hairbrush. Using technology, that gadget collects information about hair brushing and then teaches us how to do it correctly.


What do you think will be next?