Apple has just been granted a new patent for virtual audio: it simulates sound coming from anywhere in the room. About a month ago a similar patent was granted for special headphones; this one, however, covers built-in MacBook speakers. That might be a sign that we can expect more Apple products with this technology in the future.
It might sound rather simple and unimpressive at first, but if you think about how watching TV, for example, might change, it is simply awesome. Imagine you are watching a horror movie and suddenly hear steps behind you. If that doesn’t give you goosebumps, I don’t know what will. E-sports are also getting more and more popular, and in a lot of games hearing the enemy’s footsteps and other sounds is a crucial factor in winning. Games could therefore become more realistic and exciting.
Furthermore, I don’t believe this only benefits the entertainment sector; it might be useful for certain jobs as well.
Let’s say we have built a remote-controlled robot that helps firefighters rescue people from burning buildings. When somebody is trapped under a collapsed house, hearing the person call for help is useful, but it can take hours to actually locate where the shouting is coming from. With this technology, the person could be located and helped more quickly and efficiently.
The same applies to remote-controlled cars or trucks. Besides seeing through cameras what is going on, it is still vital to hear, for example, where sirens are coming from in order to make space for the police or an ambulance. I believe it is a great invention, and in the future it will be very useful for making people feel that they are actually present, whether for entertainment or to save lives without risking their own.

The system works through crosstalk cancellation, making listeners perceive the sounds as coming from somewhere other than the speaker itself. According to Patently Apple, this allows signals to contain “spatial cues” that position the sound virtually in the room you are in. The sound waves are emitted at different angles to create this illusion.
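Crosstalk cancellation can be sketched numerically. The toy example below is my own illustration, not Apple’s patented method: it treats one frequency bin as a 2x2 acoustic transfer matrix and pre-filters the signal with the matrix inverse, so that each ear receives only its intended channel.

```python
import numpy as np

# Hypothetical 2x2 acoustic transfer matrix at one frequency:
# H[i][j] = gain from speaker j to ear i. The off-diagonal terms
# model the "leakage" of each speaker into the opposite ear.
H = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Crosstalk cancellation: pre-filter the desired binaural signal d
# with H^-1 so that what arrives at the ears (H @ C @ d) equals d.
C = np.linalg.inv(H)

d = np.array([0.8, 0.1])    # desired signal at left/right ear
speaker_out = C @ d         # what the speakers actually play
at_ears = H @ speaker_out   # what physically arrives at the ears

print(np.allclose(at_ears, d))  # True: the crosstalk is cancelled
```

A real system does this per frequency band with measured head-related transfer functions, but the core idea is the same matrix inversion.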
For a couple of days now, Instagram has been warning you when you are about to post a “potentially offensive” caption for a photo or video.
It works like this: your caption is compared to captions that were previously reported by other users.
If your caption is not similar, you will not get any notification. However, if you are about to post an offensive caption, Instagram encourages you to change it. You are not forced to change anything; it is just a nudge to reconsider whether you really want to proceed.
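As a rough illustration of the comparison step, here is a toy sketch. The captions, the Jaccard word-overlap measure, and the threshold are all made up for illustration; Instagram’s actual model is not public.

```python
# Toy sketch (NOT Instagram's real system): compare a new caption
# against previously reported ones and warn above a similarity threshold.
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two strings, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical examples of captions reported by other users.
reported = ["you are so dumb", "nobody likes you"]

def should_warn(caption: str, threshold: float = 0.5) -> bool:
    return any(jaccard(caption, r) >= threshold for r in reported)

print(should_warn("you are so dumb lol"))  # True: close to a reported caption
print(should_warn("sunset at the beach"))  # False: no overlap
```

In practice a service like this would use a trained classifier rather than plain word overlap, but the warn-above-threshold flow is the same.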
The same concept has already existed for comments since July.
Another feature implemented a while ago is the so-called “shadow ban,” which makes a comment under somebody’s post visible only to the person who wrote it, and to nobody else. This is basically the strategy of simply ignoring your bully until that mean person gets too tired and stops.
As you can see, Instagram is trying to create a bullying- and hate-free zone, which I believe is extremely good: Instagram is not just used by adults; in fact, 72% of 13-17-year-olds use it (Sprout Social). And if we are completely honest with ourselves, whether it was us or other people, teenagers can be really mean and rude sometimes.
However, even adults in a professional environment sometimes write things that another person might find offensive. For example, when you browse through LinkedIn, a social network for finding jobs, and scroll through the comments, you can here and there find heated discussions, which may be on a different level language-wise, but that doesn’t make a comment any less offensive or inappropriate. Does the person mean it? Some probably do, but I believe it is also connected with being on the internet: instead of rereading the post or comment we are about to publish to millions of people and responding in a calm, well-considered manner, we just click the button.
Therefore, a small reminder that a comment might insult another person could help us sit back for just one second and reconsider whether we really mean it or whether our emotions are taking over.
What do you think about the new way Instagram is going?
We all know the basic ingredient lists and calorie tables provided on our food.
Some countries have stricter regulations than others. France, for example, requires companies to include a disclaimer when advertising a product with a lot of sugar, salt, and so on.
McDonald’s and many other companies even advertise that their meat comes from local farmers.
So we basically know what is in our product, and sometimes also where it comes from.
However, IBM is going one big step further. IBM Food Trust wants to give the customer much more information through the use of blockchain.
In the future, the French company Labeyrie will sell salmon from Cermaq, a Norwegian subsidiary of Mitsubishi, with an exact record tracking the salmon from egg to supermarket package. In the shop, customers can then check everything from the fish’s health and diet to its medication by simply scanning a QR code.
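The tamper-evident record such a ledger provides can be sketched with a simple hash chain. All stage names and fields below are invented for illustration; this is not IBM Food Trust’s actual data model.

```python
import hashlib
import json

# Illustrative only: a hash-chained provenance record of the kind a
# food-traceability ledger might store. Each entry commits to the
# previous one, so altering an early event breaks every later link.
def add_event(chain: list, event: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True) + prev_hash
    chain.append({"event": event,
                  "prev_hash": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["event"], sort_keys=True) + prev
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
add_event(chain, {"stage": "egg", "site": "hatchery-1"})       # made-up data
add_event(chain, {"stage": "feed", "diet": "marine-pellets"})
add_event(chain, {"stage": "packaged", "lot": "A-123"})

print(verify(chain))  # True: the chain is intact
```

A real blockchain adds distributed consensus on top, but this chaining is what lets a QR-code lookup prove the record was not quietly rewritten.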
Hard Fork recently reported that Nestlé, the leading food brand, joined forces with French retailer Carrefour to place baby milk formula on the blockchain, also using IBM Food Trust.
To be more exact, Nestlé wants to provide customers with all this information for two products, the GUIGOZ Bio 2 and 3 infant milks.
Carrefour even announced that it wants to put at least 20 percent of its in-house products on the blockchain by next year (2020).
JD.com, a popular online retailer from China, doubled the sales of its free-range chicken and attributed this success to the blockchain technology of IBM Food Trust.
Auchan also sells carrots that are tracked on the blockchain.
And I believe it’s safe to say that many more big companies will jump on this train.
I personally think it is awesome to know where our food comes from and what we actually consume. IBM itself says in the IBM Food Trust trailer that “The World Health Organisation estimates 600 million people fall ill after eating contaminated food, and 420,000 people die every year.”

Where I see a problem is that animals sometimes do need medication; just like us humans, they can get sick. But if you are completely honest with yourself, which would you choose: the healthy chicken or the one that got antibiotics? The long-term consequence is that when an animal gets sick, it is not treated but simply killed, because from that point on the treatment, food, transportation, and so on would just be a high cost, only to throw the meat into the trash in the end. That doesn’t just sound sad; as we all know, more food waste also means we are harming our planet and, therefore, ourselves.
Ask yourself: what do you associate with flying?
Being free? Seeing the world from a different angle? Traveling and adventures?
All of it sounds amazing, but a lot of people are scared of flying because they can’t control what is going on. It all depends on the pilot sitting in the cockpit.
So what do you do when the pilot(s) can’t fly anymore, for whatever reason?
Hope for Superman to save the plane you are sitting in?
Or maybe bees to land the plane softly?
Well, good news: you don’t need good connections to Superman or a whole army of bees anymore.
Garmin has introduced a system that lets a plane navigate and land itself with just the push of a button.
After the button is pushed, or the plane notices that the pilot isn’t responsive, the system determines the best place to land, considering factors like weather, fuel, and runway size.
The system navigates the plane around terrain (e.g., mountains) and bad weather that could cause the aircraft to crash.
The system automatically notifies air traffic control so that they can prepare for the landing, route other planes around you, and if necessary arrange ambulances, fire trucks, or whatever else is needed.
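The site-selection step can be imagined as a simple filter-and-rank over candidate airports. This is a toy sketch with made-up airports, numbers, and thresholds, not Garmin’s actual algorithm.

```python
# Toy sketch of emergency landing-site selection (all data hypothetical).
candidates = [
    {"name": "ALPHA",   "distance_nm": 40, "runway_ft": 5200, "weather_ok": True},
    {"name": "BRAVO",   "distance_nm": 15, "runway_ft": 2400, "weather_ok": True},
    {"name": "CHARLIE", "distance_nm": 25, "runway_ft": 6000, "weather_ok": False},
]

MIN_RUNWAY_FT = 3000  # assumed minimum runway length for this aircraft
FUEL_RANGE_NM = 120   # assumed remaining range on current fuel

def pick_landing_site(candidates):
    # Keep only airports that are reachable, long enough, and have good weather.
    feasible = [c for c in candidates
                if c["weather_ok"]
                and c["runway_ft"] >= MIN_RUNWAY_FT
                and c["distance_nm"] <= FUEL_RANGE_NM]
    # Of those, prefer the nearest one.
    return min(feasible, key=lambda c: c["distance_nm"], default=None)

print(pick_landing_site(candidates)["name"])  # ALPHA
```

BRAVO is closer but its runway is too short, and CHARLIE has bad weather, so the sketch picks ALPHA; a real system would weigh many more factors (terrain, approach procedures, winds) continuously.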
The G3000 flight deck switches to a simplified interface that basically just lets the passengers talk to air traffic control and shows the information they need, like the landing time and any updates. This most likely helps keep passengers calmer, rather than panicking over all the radar screens and numbers on the displays, which might look scary, especially in such a moment.
The Piper M600 turboprop and the Cirrus Vision Jet are seeking FAA authorization for 2020. Older models with the G3000 cockpit could be upgraded, but Garmin has not yet confirmed whether this option will be made available.
As you can see, there are no plans for now to implement this system in commercial planes, and it is meant to be used only in emergencies. So maybe keep your connections to Superman and your bee friends for now.
However, in my opinion, this is a good indicator that in the future planes will fly without pilots having to sit in the cockpit for takeoff and landing.
MIT is known worldwide for its robotics research. In the past, its researchers created a robot that broke the world record for solving a Rubik’s Cube, in only 0.38 seconds, and the first four-legged robot to do a backflip.
That sounds impressive, but their newest development doesn’t just look cool; it can be used in many ways and takes robots to the next level of productivity.
Picking up objects and flipping them around is easy for people. We do it every day, e.g., when taking notes at university, at work, or at home: we pick up the pen, bring it into the right position, and start writing. The same goes for eating a sandwich: we shift it a little to bite from the other corner.
For our robot friends, however, it is tough to pick things up without either dropping or crushing them. Then add the challenge of turning the object they have just managed to hold, and it becomes a genuinely difficult task.
That is why robots have needed a long time to plan and calculate all the factors: geometry, friction, all the possible ways the object can be turned, and so on. This whole process previously took tens of minutes, which still sounds impressive considering that if we had to measure and calculate these numbers ourselves, we would sit there for hours and probably still fail.
MIT managed to bring the robot’s planning time down to less than a second.
How is that possible? The robot pushes the object against a stationary surface and slides its claw along the object until it is in the right position.
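A one-dimensional caricature shows why this shortcut is so fast: with the object braced against the surface, the planner no longer searches the full space of regrasps, it only slides along one axis until the gripper reaches the target pose. All numbers below are hypothetical.

```python
# Toy 1D caricature of bracing-and-sliding (numbers are made up):
# the gripper slides along the braced object in fixed increments
# until it reaches the desired grasp position.
def slide_to_grasp(current_mm: float, target_mm: float, step_mm: float = 5.0) -> int:
    """Return how many sliding steps are needed to reach the target."""
    steps = 0
    while abs(current_mm - target_mm) > step_mm / 2:
        current_mm += step_mm if target_mm > current_mm else -step_mm
        steps += 1
    return steps

print(slide_to_grasp(0.0, 40.0))  # 8 sliding steps of 5 mm
```

Searching one line instead of the full space of 3D regrasp motions is, in spirit, why the planning time collapses from minutes to under a second.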
In the future, this could mean that instead of a specialized tool like a screwdriver, machines would have something more like a hand, giving them the ability to pick up different kinds of tools and perform various tasks.
This improvement would most likely save companies space and money, since they would need only one robot for multiple steps.
This is another case where thinking outside the box, by simply using the surroundings, has a huge effect.