Today, I want to show you a great example of how object recognition technologies based on machine learning:
1) are becoming widely available and do not require rare, genius-level programming skills to get results;
2) can be trained effectively even on very modest data sets.
An article I read some time ago tells how a bird lover and part-time computer science professor, together with his students, taught a neural network to recognize bird species and then (and this impressed me a lot) to distinguish the individual species of woodpeckers that flew to the bird feeder in his yard.
Remarkably, 2,450 photos in the training sample were enough to recognize eight different woodpecker species. The professor estimated the cost of a homemade station for recognizing and identifying birds at about $500. This can truly be called technology for everyone, and machine intelligence in every yard.
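This kind of small-data success usually comes from transfer learning: a network pretrained on millions of generic images is kept frozen, and only a small classification head is trained on the new photos. Here is a purely illustrative sketch of that idea; the "embeddings" below are simulated stand-ins for the features a frozen backbone would produce (none of the numbers or names come from the article):

```python
# Toy sketch of transfer learning: with a frozen pretrained backbone,
# only a small linear head needs training, which is why ~2,450 photos
# can suffice for 8 species. Backbone features are simulated here.
import numpy as np

rng = np.random.default_rng(0)
NUM_SPECIES, FEAT_DIM, N_PHOTOS = 8, 512, 2450

# Pretend these are 512-d embeddings from a frozen backbone, with each
# species clustered around its own prototype vector.
prototypes = rng.normal(size=(NUM_SPECIES, FEAT_DIM))
labels = rng.integers(0, NUM_SPECIES, size=N_PHOTOS)
feats = prototypes[labels] + 0.5 * rng.normal(size=(N_PHOTOS, FEAT_DIM))

# Train only the linear head (softmax regression) by gradient descent.
W = np.zeros((FEAT_DIM, NUM_SPECIES))
for _ in range(100):
    logits = feats @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(N_PHOTOS), labels] -= 1.0     # softmax gradient
    W -= 0.01 * (feats.T @ probs) / N_PHOTOS

accuracy = np.mean((feats @ W).argmax(axis=1) == labels)
print(round(accuracy, 2))  # accuracy of the tiny trained head
```

The point of the sketch is the parameter count: the head has only 512 × 8 weights, so a few thousand examples are plenty, whereas training a whole network from scratch would need far more data.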
Moreover, this technology can really help birds. As Lewis Barnett, the inventor of this technology, wrote in his article: “Ornithologists need accurate data on how bird populations change over time. Since many species are very specific in their habitat needs when it comes to breeding, wintering and migration, fine-grained data could be useful for thinking about the effects of a changing landscape. Data on individual species like downy woodpeckers could then be matched with other information, such as land use maps, weather patterns, human population growth and so forth, to better understand the abundance of a local species over time.”
As some people have rightly noted, this technology also has great commercial potential. Just imagine camera traps that can recognize the birds harming your fruit trees and then activate a device that makes a loud noise to scare the pests away.
There is a global economic gender gap in the AI workforce, which needs to be addressed as soon as possible if the industry does not want to suffer soon, says one of the latest articles from the WEF (World Economic Forum).
Almost 80% of professionals with AI skills are male. That gender gap is three times larger than in other industries.
It’s no secret that demand for AI skills is growing second by second, yet the industry might miss out on opportunities to innovate if it excludes half the population from the development process. Just imagine how few women will then be able to participate in the economy as a whole! And we are all aware of the importance of diversity in all its manifestations, which above all improves innovation and technology itself.
“In an era when human skills are increasingly important and complementary to technology, the world cannot afford to deprive itself of women’s talent in sectors in which talent is already scarce”
In addition, the research found that women working in AI are less likely to hold senior roles. The data show that women generally work in the use and application of AI, in common positions including data analytics, research, and teaching, whereas men tend to work on developing the technology itself, as software engineers, heads of engineering or IT, or chief executives. In short, women are “growing but not gaining”. Male AI professionals will continue to outnumber women, even as both genders continue to gain AI skills. At the current pace, the WEF estimates it will take 202 years to close the gap women face in the workplace. That figure is based on differences in earnings, workforce participation, and the number of women in top jobs.
Remember, there is always a way out; we just need to take a step forward! To break the cycle of gender imbalance, it is critical to ensure that women at all stages of their careers are inspired to take an active part in the development and use of new technologies, and this concerns more than just AI.
“Industries must proactively hardwire gender parity in the future of work through effective training, reskilling and upskilling interventions and tangible job transition pathways, which will be key to narrowing these emerging gender gaps and reversing the trends we are seeing today. It’s in their long-term interest because diverse businesses perform better”
No less important is understanding how gender gaps manifest across different industries, occupations, and skills. Research and data can illuminate the persistent challenges women face when making decisions about employment.
“Research across hundreds of brands in dozens of categories shows the most effective way to maximize customer value is to move beyond customer satisfaction and connect with customers at an emotional level.”
– Harvard Business Review
If you need to make an insurance claim, you use an online form. To open a new account, you simply fill in a form and then benefit from a quick email response. Maybe you would like to take out a loan? It should be a piece of cake to chat briefly with a chatbot and learn everything you need. Do you recognize yourself in the lines above?
The answer is unequivocally yes. We are constantly connected to the network. Companies have improved efficiency and cut costs by shifting to digital contact with customers. But on the other side of the coin, for many businesses the emotional bond has been broken by digital strategy and efficiency, which has directly affected brand value, revenue growth, and customer churn. This backdrop sets the scene for incredible innovation, because it has become quite hard for clients to tell the brand values of two different companies apart. It’s all about digital commoditization.
FaceMe is a world-leading provider of Digital Humans via its AI-based Intelligent Digital Human Platform, which expands brands’ opportunities to build reliable real-time interaction with clients, based on customized content and unforgettable personalities that create an emotional connection using the power of the human face. IBM Cloud technologies are used together with high-capacity IBM Cloud bare-metal servers to provide endless scalability of this technology for hundreds of simultaneous conversations. In a nutshell, it enables organizations to reduce the cost to serve while opening opportunities for growth and improving the customer experience. The company now operates in New Zealand, the US, Australia, and Europe, working for global brands such as Vodafone and UBS. It is available to customers through browsers, mobile phones, or kiosks.
Analysts estimate that within the next decade nearly 85% of communication with customers will take place solely via digital channels. Mobile applications, web portals, and chatbots will become even faster and more convenient, but companies might have a hard time building bridges with clients in such a competitive environment.
I’m pretty sure that our future reality will draw a lot of eyeballs. At the very least because Digital Humans will process a question in just 100 milliseconds, converting text from a chatbot into key human qualities: responding with speech, facial expressions, and gestures, and applying dynamic reactions based on the customer’s behavior and emotions. The client, in turn, perceives an almost immediate response, which means the conversation flows well and feels as comfortable as talking with a real agent.
Bringing emotional connection to the digital world is as crucial for business as it is for solving pressing issues related to health and well-being, education, the environment, and many other spheres. Take mental health as an example. The first important step is simply to get patients talking. As studies have shown, 63% of people would prefer to talk about their mental health problems with Digital Humans. So there is a great opportunity to make a valuable contribution to society. FaceMe also works with the Centre for Digital Business to create digital reading instructors who can help children with reading problems, for whom there is a shortage of qualified teachers. One more potential use case is providing consultations and emotional support for patients recovering from heart surgery.
The technologies of IBM and FaceMe are a powerful combination that intends to change the customer experience all over the world. Remember, there is no limit to our ambitions and our ability to make a positive contribution to society by introducing emotional connection into the digital world.
A long-standing goal of human-computer interaction has been to enable people to converse freely with machines, as they would with each other. In recent years, we have witnessed a revolution in the ability of computers to understand and generate natural speech, especially with the application of deep neural networks.
One of the inventions in this area is Google Duplex. As you probably know, Duplex is a technology for conducting natural conversations to carry out “real world” tasks over the phone. It is aimed at completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, as they would to another person, without having to adapt to a machine. For example, Duplex can automatically reserve a table for you in a restaurant by phoning the manager.
While Google is still testing and developing its new system with a small number of Pixel phone users, another tech giant, Alibaba, already has a working model. It is used not for restaurants but for an even narrower niche: the delivery of goods. At an annual AI research gathering, the e-commerce giant demoed a sample conversation in which the voice assistant was tasked with asking a customer where a package should be delivered.
The most amazing thing is that Alibaba’s voice assistant was able to handle some tricky situations during the dialog, such as interruptions (pauses), nonlinear conversation (the customer starts a new line of inquiry), and implicit intent (the customer doesn’t explicitly say what he actually means). It is amazing news which, by the way, once again underlines how strong China is in the field of artificial intelligence. Currently, the agent is used only to coordinate package deliveries, but it could also be expanded to handle other topics.
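To make the “nonlinear conversation” idea concrete, here is a toy sketch of my own (it has nothing to do with Alibaba’s actual system, and every name in it is made up): a dialog manager that keeps a stack of tasks, so a side question can interrupt the current task and the agent returns to it afterwards.

```python
# Toy dialog manager: a task stack lets a side question (nonlinear turn)
# interrupt the current task, after which the agent resumes it.
# Purely illustrative; not Alibaba's architecture.

class DialogManager:
    def __init__(self):
        self.stack = ["confirm_delivery_address"]  # the task at hand

    def handle(self, utterance):
        if "when" in utterance:            # customer opens a new inquiry
            self.stack.append("answer_delivery_time")
        task = self.stack[-1]
        if task == "answer_delivery_time":
            self.stack.pop()               # side question answered, resume
            return "It should arrive tomorrow. Now, about the address?"
        return "Where should the package be delivered?"

dm = DialogManager()
print(dm.handle("Hi"))                          # asks for the address
print(dm.handle("Wait, when will it arrive?"))  # handles the interruption
print(dm.handle("Send it to my office"))        # back to the original task
```

Real systems replace the keyword check with an intent classifier and track far richer state, but the stack idea is the core of returning gracefully to an interrupted task.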
Stock up on strong nerves, as well as a fat wallet, to make use of the world’s first airborne automobiles on an industrial scale.
The idea of Urban Air Mobility (UAM) may still be pie in the sky for some, but not for aviation giant Airbus, German car manufacturer Audi, and the Italdesign design house. The trio has collaborated to develop real, near-term urban mobility solutions for avoiding rush hour, unveiling a scale-model prototype of a flying drone-car called the “Pop.Up Next”. Why are we still sitting in the traffic jams we are so sick of, if there’s a chance to simply fly over them?
“This important partnership with Audi addresses both current and future challenges for urban mobility. As a first concrete milestone in the cooperation we are developing, we will be offering multi-modal transportation solutions to the world’s most congested cities,” said Airbus CEO Tom Enders. “The world is rapidly urbanizing, and ground infrastructure alone cannot meet the demands of tomorrow. Increased congestion is pushing the cities’ transport systems to the limits, costing travellers and municipalities valuable time and money. Adding the sky as a third dimension to the urban transport networks is going to revolutionise the way we live – and Airbus is ready to shape and build that future of flight.”
Well, the car-drone combo is made up of three separate modules, which together give customers a seamless and ultra-convenient travel experience. The key component is a passenger capsule 2.5 meters long and 1.4 meters high. Thanks to its modular design, it can easily unhook from its wheeled chassis and be airlifted by an accompanying drone, which is autonomous and powered by four rotors. The capsule can also attach to a battery-powered chassis to become a two-seater electric car. Without the capsule, the chassis can drive only 100 kilometers on a single charge, so, unfortunately, this is not a long-distance vehicle concept.
No less interesting is that passengers can summon the drone using a phone app, which can also suggest the best transport options as well as ride-sharing demand and relative cost. The capsule can even be paired with other futuristic transport systems, such as the Hyperloop.
Even though the model’s test flight went off without a fault, Airbus executive Jean-Brice Dumont is cautiously optimistic about when the Pop.Up system will actually become a reality.
“I think it will take more than a decade until a real significant, massive deployment of an air taxi system” is ready, he said, and “for this we need to tick a list of boxes. The vehicle is one, safety is the overarching one, infrastructure is one, acceptability is another one.”
However, a serious competitor, Uber, has turned out to be even more ambitious. Last year, the transportation company revealed an artist’s impression of a sleek machine, with the goal of starting demonstration flights by 2020 and actual use by 2023. Uber’s battery-powered aircraft looks like a blend of a small plane and a helicopter, with fixed wings and rotors.
So, what to expect from state-of-the-art technology?
You must agree that anxiety comes by itself in this case. The barrier standing between you and a future of commuting through the skies is bravery. The fact that you may feel uneasy in a flying car, once just a silly childhood dream, is hardly a shocker. That’s not to say they’re unsafe; the parachute, of course, should help. But you’re still going to have to entrust your life to a vehicle. And yet, the attraction of freedom and faster journeys might be enough for many people to roll the dice.
Have you seen Minority Report directed by Steven Spielberg?
For those who haven’t, I recommend watching it, because the film’s prophecy is starting to come true.
Because technological progress is constant, and the role of Artificial Intelligence in our lives keeps growing, we can definitely be scared of what is happening around us.
Have you ever thought about one of the most intimate things in people’s lives?
It is sexual orientation. Nowadays many people, for example athletes, family members, and schoolmates, hide their real sexuality for fear of social indignation. These people have to fight the inner battle of “coming out” every single day, and now it is going to get worse. AI can now guess whether you are gay or straight based on photos of your face. That is a fact, not an opinion! We can now say that machines have become a better “gaydar” than people. A study from Stanford University found an algorithm that could distinguish gay from straight with 81% accuracy for men and 74% accuracy for women.
The algorithm was built by training machine intelligence on 35,000 facial photos from one of the dating sites, whose profiles revealed the users’ real sexual orientation.
“The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.” – Sam Levin, The Guardian
Okay, so what if I am straight?
The authors of the study, which was published in the Journal of Personality and Social Psychology, Dr. Michal Kosinski and Yilun Wang, claim that similar AI systems could also be trained to spot other human traits, such as IQ or political views. They also warn us about this direction of AI development, because it can turn into something we really don’t want in our lives.
It is happening now!
Police in the UK are piloting a new project that uses AI to determine how likely someone is to commit a crime. Sound familiar? Going back to what I wrote at the beginning of my post: Steven Spielberg (director) and Philip K. Dick (writer) were right. AI is going to prevent us from committing crimes.
“(…) The system has 1,400 indicators from this data that can help flag someone who may commit a crime, such as how many times someone has committed a crime with assistance as well as how many people in their network have committed crimes. People in the database who are flagged by the system’s algorithm as being prone to violent acts will get a “risk score,” New Scientist reported, which signals their chances of committing a serious crime in the future. (…)
(…) Donnelly told the New Scientist that they don’t plan to arrest anyone before they’ve committed a crime, but that they want to provide [support] to those who the system indicates might need it. He also noted that there have been cuts to police funding recently, so something like NDAS (National Data Analytics Solution) could help streamline and prioritize the process of determining who in their databases most needs intervention. (…)”
– Melanie Ehrenkranz, gizmodo.com
The project is still in its infancy compared to how important it could become for the future of the justice system.
To sum up my post: billions of facial images of people are publicly available on social media sites and in government databases, and more come from street cameras. In my opinion, we should care more about our privacy in the media and not let governments have such a serious impact on our lives, because, as we know, systems are like people: they sometimes fail.
The term Artificial Intelligence (AI) refers to (re)creating natural intelligence (human and animal) with machine learning, within computer science. The goal of AI is for machines and/or computers to replicate or mimic human reflection and reaction. Among others, AI development focuses on the following capabilities:
(1) Problem solving: identifying the actions needed to achieve and best solve a given goal.
(2) Reasoning: identifying and acknowledging diverse events, knowledge, relations and more, in order to decide on the best solution and analyze its consequences.
(3) Planning: identifying, step by step, the tasks needed to achieve a specific project or goal within a set time.
(4) Perception: interpreting the surroundings (such as voices, faces and objects) thanks to sensors such as cameras, tactile sensors and microphones.
(5) Motion and manipulation: determining its own location, navigating its environment, manipulating objects, and using physical motions and gestures.
(6) Learning (automatic, semi-supervised and supervised): learning the complex relations between its actions and their results, in order to improve its performance, for example to move more effectively.
(7) Natural language processing: understanding basic human language in a given language and being able to communicate and engage in conversation.
(8) Social intelligence: to perform such tasks, an AI machine must have the required human-like intelligence; it must therefore also be able to fit within our social norms and to express and/or perceive emotions.
This last capability matters most for social robots. Already used in warehouses and factories, robots are now slowly but surely entering our homes and public places, which increases the value of these machines; by becoming more and more accessible and available in recent years, they are gradually shaping a new daily lifestyle.
Lea – The robot assistant for elderly people, by The Robot Care:
On the market since last year, Lea was built to help elderly people walk or sit, thanks to its very thoughtful design; to connect them with family and friends, thanks to an integrated screen and video communication system; and to ensure its owner’s safety, thanks to sensors, while also assisting with physical activities and motivating elderly people to move and be more active.
Savioke Relay – The service robot for the hospitality industry, by Savioke
Relay provides personal customer service, especially in hotels but also in workplaces (the device is used in about 17 locations, including major hotel brands such as Sheraton, The Westin, and Marriott), and is primarily used to transport small items from one person to another.
Buddy – The home companion and assistant robot, by Blue Frog Robotics
Built with a screen representing its “face” and “head” and moving on wheels, Buddy is a “companion” robot that offers assistance (e.g., controlling smart home appliances), entertainment at home (playing with kids), and interfaces; more importantly, Buddy makes your home more secure: he can be used as a mobile camera, detect anything going on in your home while you are away, and send you a signal when something fishy is going on.
In the past as well as today, the obstacle to placing robots in our homes and communicating with them lies less in a lack of technological development than in boundaries, ethical and moral issues, and the complexity of social norms in human society. These issues raise further complex questions about whether social robots will be accepted into humans’ daily lives. Developers have tried to address these concerns by creating “acceptable” robots, as this technology worries many people. Andrea Thomaz, an associate professor of electrical and computer engineering at the University of Texas at Austin, explains this situation briefly in her TED talk “The Next Frontier in Robotics: Social, Collaborative Robots”, where she also explains the importance of building into social robots the several different capabilities cited and described at the beginning of this article.
For robots to become truly developed social companions, however, is much more complex than it might seem. Robots should not only recognize behavioral patterns, but also communicate and act as we do human-to-human. They should therefore be able to detect human emotions, sarcasm and motives, and react to them, since we live in a constantly dynamic environment. That would make the two-way flow of robot-to-human goal achievement much easier and more feasible, but this specific capability is far harder to attain. However, the recent availability of companion robots for the home, with Buddy (available to pre-order since mid-2016) and Jibo (available to purchase since the end of 2017), has raised hopes that such robots can be developed further, reach their full potential, and soon truly change our daily routines.
Have you ever thought that your clothes could merge with your body?
Just imagine: all your natural reactions, behaviors and processes are captured by the shoes you wear every day, by a T-shirt, by a dress. Can you imagine your favorite sweater changing color every time you catch the flu, or reading a person’s irritation or anger not from his or her facial expressions but from the movement of the fabric of his or her shirt?
In the near future, clothing could serve not only as protection from outside factors, but as a real part of you that complements you and your physical processes and becomes an integral part of your communication with the surrounding world.
Nowadays such an assumption is no longer a fantasy but a fact. Numerous examples have already been created of clothes that not only provide comfort but also visualize and assist the hidden physical and mental processes of the human body.
It is well known that people have always demonstrated their individuality through the clothes they wear, in order to establish a specific social position in conversation with others. Innovations in technology have now given us the opportunity to express ourselves in a broader way.
The use of technology in fashion is no longer merely a tool to produce clothes, but a new way to express a person’s inner vision.
Newly invented fabrics and materials can respond to and display the wearer’s emotions and condition. As a result, a person wearing such clothes is already unique, thanks to the uniqueness of his or her own emotions and feelings, because each person’s inner world is exclusive.
Moreover, such high-tech clothes would be able to open a real conversation with the human body itself: through physical factors and sensory processing, the clothes would interact with the body and, consequently, transform themselves to satisfy the wearer.
Spider Dress
Through the combination of fashion and robotics, the Dutch designer Anouk Wipprecht has created a new realm of personal space. When the dress’s biosensors catch any evidence that the wearer feels threatened, the robotic spider legs built into the dress take up a “spider attack” position.
Social Escape Dress
Like Anouk Wipprecht, the designers from Urban Armor have been working on the problem of personal space in a big city. The Social Escape Dress is equipped with GSR (Galvanic Skin Response) sensors that detect strong feelings of stress. If the wearer feels stressed or uncomfortable in any surroundings, the dress emits a cloud of fog from the collar.
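The underlying logic can be imagined as a simple threshold rule: skin conductance rises with emotional arousal, and the fog fires only when it stays elevated over a window of readings. This is an illustrative sketch with made-up numbers, not Urban Armor’s actual firmware:

```python
# Illustrative sketch of the Social Escape Dress logic: a galvanic skin
# response (GSR) reading above a stress threshold triggers the fog emitter.
# Threshold and sample values are invented for illustration.

STRESS_THRESHOLD = 4.0  # skin conductance in microsiemens (illustrative)

def should_emit_fog(gsr_readings, threshold=STRESS_THRESHOLD):
    """Trigger only on sustained arousal: the average of the recent
    window must exceed the threshold, not just a single spike."""
    window = gsr_readings[-5:]  # last five samples
    return sum(window) / len(window) > threshold

calm = [2.1, 2.3, 2.0, 2.2, 2.4]
stressed = [3.9, 4.5, 4.8, 5.1, 5.6]
print(should_emit_fog(calm))      # False
print(should_emit_fog(stressed))  # True
```

Averaging over a window is the design point: a single noisy sensor spike should not wrap the wearer in fog on a calm afternoon.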
By experimenting with smart fabrics, robotics, sensory processing, and eye-tracking technology, the Montreal designer Ying Gao has created garments that compliment their “user”: a series of two dresses that can change form and light up when someone looks at the person wearing them. According to The Culture Trip: “It is normally the wearer who notices your gaze, not the clothes themselves, but this innovative creation has everyone’s head turning”.
Swarovski gemstone headpiece
The creative group “The Unseen”, headed by Lauren Bowker, has always mixed fashion and science. Bowker, whom the press calls “The Alchemist”, invented an ink that changes the color of whatever item it is applied to. Among her many inventions, such as a color-changing hair dye, the headpiece created for Swarovski is the one that acts in accordance with the condition of the human body. The headpiece is constructed from gemstones that were specially grown and then covered with Bowker’s ink, so that they become real indicators of shifts in a person’s energy. When the gemstones “feel” a transition in the person’s condition, they change color. As Dezeen stated: “Excitement, nerves, and fear all produce different colours; and quicker shifts in emotion create more dramatic patterns.”
Bubelle emotion sensing dress
About a decade ago, Philips experimented with combining smart electronics and fashion, and created one of the first garments to react to the wearer’s emotions. The dress is covered with numerous LEDs that illuminate and change color according to the person’s emotional state. These changes are picked up by skin sensors installed in the bottom layer of the dress.
Caress of the Gaze
Behnaz Farahi has developed an interactive garment: a 3D-printed collar that acts as an artificial skin. Thanks to a built-in camera, it can catch another person’s gaze and react to it with lifelike motions. The garment is equipped with Shape Memory Alloy actuators, which work as a muscle system for this 3D-printed “skin”. The idea is to create a conversation not only “person to person”, but “person to person to clothes”.
To conclude, technological innovation, and indeed the symbiosis of technology with fashion, has given us the ability to create a new way of expressing ourselves and interacting with the outer world by demonstrating our real inner processes. The clothes become a medium between the wearer and the surroundings. Such processes help a person enter a different level of interaction with others, where the garment might become an equal conversational partner for people and the world.
But is this what a person actually wants? Here millions of ethical questions step up. Would we be willing to uncover our thoughts, our mood, our health status, and so on? Would we be willing to sacrifice the privacy of our own bodies and show our real “I”?
Artificial Intelligence is a growing trend in technology. Most of us are aware of the fact that its importance is steadily increasing. Some people say AI is, in fact, the most important topic for our future.
Looks like we should start getting used to it and finding a place for AI in our lives. Of course, it is already widely used in business; Accenture calls AI “fuel for growth”. However, what I want to talk about is a solution that can help you learn, relax, or fight insomnia thanks to Artificial Intelligence.
Brain.fm is a service offering AI-composed music that helps you reach specific cognitive states, including deep focus, relaxation, and sleep. Music is obviously powerful, and we know that; it’s the rule all movie soundtracks are based on. But Brain.fm goes even further. The product has a lot of scientific research behind it, from how neuronal oscillations control cognitive processes to how music entrains those oscillations.
There are plenty of researchers from various universities behind Brain.fm. Experiments have even been run to measure the effects of the AI-composed music on cognition. The measures included Reaction Time (RT), Go/No-Go (GNG), Visual Pattern Recognition (VPR), and EEG. Detailed results are presented here, but, well, you probably guessed them already ;).
What’s interesting is that even athletes use Brain.fm. It lets them focus, then relax and meditate. For athletes, especially those on Olympic teams, the pressure is huge. The psychological aspects of sport are as important as the physical ones. It’s nothing new that in the finals of the most important championships, the mind matters as much as the body, and only people with nerves of steel win. Robby Smith, the US Olympic wrestling team captain, used Brain.fm during his Rio preparations. This is a great example of balanced preparation and another use of AI.
The solution sounds brilliant, doesn’t it? What’s a little worrisome is the legal info. If the AI-composed music is so beneficial to us, why do we have to be so careful about it? If it might be dangerous for epileptics, pregnant women, people wearing a pacemaker, or those who have drunk alcohol or taken medications, then how can we be sure it’s safe for the rest of us? Isn’t it just too mysterious, with no explanation accompanying the legal info?
For sure, the solution is brilliant and can help many people who can’t focus or sleep. Given today’s reality and the huge problems people have with focus, it seems demand will only grow. But can we really trust Brain.fm? I’m sure we’ll find out soon!
How many Ukrainian startups do you know? At best: Looksery (purchased by Snapchat), Grammarly, and Jooble.
Too bad. There’s far more to learn about.
Here are the 5 hottest startups of 2016 that are about to break through. Whether you are a tech lover, a startupper, or an investor, have a look at the list and learn where the real deal is.
The online portal Ain.ua, devoted to startups and entrepreneurship, conducted a survey among business angels, investors, and entrepreneurs, who were asked to point out the startups that developed and achieved the most in the past year.
Sixa – a powerful computer in the cloud
“Sixa is a full computer that operates right from the cloud via a client app. It supports various devices and is capable of running the most demanding applications. With Sixa users can easily deploy a cloud computer optimized for different tasks and access a powerful virtual computer without hardware upgrades” (www.sixa.io). The app not only lets you have a powerful computer without actually owning one, but also saves on utility bills and frees you from buying expensive hardware. In 2016 Sixa received investment from the legendary Y Combinator, $300,000 from the venture fund TMT Investments, and finally $3.5 million from the Californian fund Tandem Capital together with the Ukrainian funds Digital Future and Horizon Capital.
They have now launched a beta version of the service, which everybody can try for free.
Mobalytics – Personal performance analytics for competitive gamers
The platform measures players’ performance, helps them define their strengths and weaknesses, and provides personalized advice, based on a Gamer Performance Index, on how to improve their skills. The app was first introduced at TechCrunch Disrupt in San Francisco, where it triumphantly took first prize. Shortly after the big triumph, the startup received $2.6 million in investment from Almaz Capital, Founders Fund, General Catalyst, and GGV Capital.
People.ai – sales productivity platform powered by AI
The app analyzes sales managers’ performance and displays the statistics to the employer, helping to identify who performs best and who underperforms. The startup received investment from Y Combinator, as well as from many business angels.
Hideez Key – all your digital keys on one device
In 2016 Hideez launched its first product, the Hideez Key, which stores all your digital keys and certificates on one device, allowing you to unlock smartphones and laptops, open RFID locks, and authorize mobile payments. It also works as an anti-theft sensor for things like a wallet or keys: as the item moves away from the smartphone it is paired with, the app notifies the user of the loss.
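The anti-theft idea can be sketched with Bluetooth signal strength: received power (RSSI) falls as the tagged item moves away from the phone, so a sustained drop below a threshold suggests the item is being left behind. This is my own illustration with invented numbers, not Hideez’s actual firmware:

```python
# Illustrative proximity-alert sketch: a sustained drop in Bluetooth
# signal strength (RSSI, in dBm) below a threshold triggers a loss alert.
# Threshold and sample values are invented, not Hideez's real logic.

RSSI_ALERT_DBM = -80  # illustrative cutoff; weaker (more negative) = farther

def item_probably_lost(rssi_samples, threshold=RSSI_ALERT_DBM):
    """Alert only if all recent samples are weak, so a momentary blip
    from interference doesn't trigger a false alarm."""
    recent = rssi_samples[-3:]
    return all(s < threshold for s in recent)

nearby = [-55, -60, -58, -62]        # item sitting next to the phone
walking_away = [-60, -82, -88, -95]  # item left behind, signal fading
print(item_probably_lost(nearby))        # False
print(item_probably_lost(walking_away))  # True
```

Requiring several consecutive weak readings trades a little alert latency for far fewer false alarms, which matters when the "stolen wallet" is often just a wallet in the next room.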
SolarGaps – smart blinds that generate solar power
SolarGaps is an all-in-one energy solution. With the help of solar panels on the blinds, which generate electricity from sunlight, you can supply yourself with your own electricity and save on your electricity bills.
This great invention received $1 million of support from investors over the summer, so we are definitely going to hear about them soon.