Category Archives: Devices

MACHINE LEARNING AND ITS BLISS ON NETFLIX

Reading Time: 4 minutes

INTRODUCTION:

As the world’s leading Internet television network, with over 160 million members in more than 190 countries, Netflix serves hundreds of millions of hours of content per day, including original series, documentaries and feature films. All our all-time favourites are right at our fingertips, and that is where machine learning has taken its berth on the podium. This is where we will dive into machine learning.

MONEY HEIST (2017)

Machine learning impacts many exciting areas throughout our company. Historically, personalization has been the most well-known area, where machine learning powers our recommendation algorithms. We’re also using machine learning to help shape our catalogue of movies and TV shows by learning the characteristics that make content successful. Machine learning also gives us the freedom to optimize video and audio encoding, adaptive bitrate selection, and our in-house content delivery network.

I believe that machine learning as a whole can open up many new perspectives in our lives, which is why we need to push forward the state of the art. This means coming up with new ideas and testing them out, be it new models and algorithms or improvements to existing ones.

Operating a large-scale recommendation system is a complex undertaking: it requires high availability and throughput, involves many services and teams, and the environment of the recommender system changes every second. In this post we will introduce RecSysOps, a set of best practices and lessons that we learned while operating large-scale recommendation systems at Netflix. These practices helped us keep our system healthy by:

 1) reducing our firefighting time, 2) letting us focus on innovation, and 3) building trust with our stakeholders.

RecSysOps has four key components: issue detection, issue prediction, issue diagnosis and issue resolution.

Within the four components of RecSysOps, issue detection is the most critical one because it triggers the rest of the steps. Lacking a good issue-detection setup is like driving a car with your eyes closed.

ALL YOUR FAVOURITE MOVIES AND TV SHOWS RIGHT HERE!

The very first step is to incorporate all the known best practices from related disciplines. Creating recommendation systems involves procedures from both software engineering and machine learning, so this includes DevOps and MLOps practices such as unit testing, integration testing, continuous integration, checks on data volume, and checks on model metrics.
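As an illustration of the data-volume and model-metric checks mentioned above, here is a minimal sketch in Python. The function names, thresholds, and numbers are invented for the example; they are not Netflix's actual tooling.

```python
# Minimal sketch (hypothetical names/thresholds) of two pipeline health checks:
# a data-volume check against a rolling baseline, and a model-metric check.

def check_data_volume(row_counts, today_count, tolerance=0.3):
    """Pass if today's input volume stays within `tolerance` (a fraction)
    of the recent average volume."""
    baseline = sum(row_counts) / len(row_counts)
    deviation = abs(today_count - baseline) / baseline
    return deviation <= tolerance  # True means the check passes

def check_model_metric(metric_history, new_metric, max_drop=0.05):
    """Pass unless a new offline metric (e.g. recall@k) drops more than
    `max_drop` below the best recent value."""
    return new_metric >= max(metric_history) - max_drop

# Example: volumes look stable, but the model metric regressed badly.
print(check_data_volume([100, 110, 105], 104))       # volume check passes
print(check_model_metric([0.42, 0.44, 0.43], 0.35))  # metric check fails
```

In a real pipeline, checks like these would run in continuous integration and page the team on failure.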

The second step is to monitor the system end-to-end from your own perspective. In a large-scale recommendation system, many teams are involved; from the perspective of an ML team, there are both upstream teams (who provide data) and downstream teams (who consume the model).

The third step toward comprehensive coverage is to understand your stakeholders’ concerns; this is the best way to increase the coverage of the issue-detection component. In the context of our recommender systems, there are two major perspectives: our members and our items.

Detecting production issues quickly is great, but it is even better if we can predict those issues and fix them before they reach production. For example, proper cold-starting of an item (e.g. a new movie, show, or game) is important at Netflix because each item launches only once, much like a fashion product: once the demand is gone, a new product launches.

Once an issue is identified by either the detection or the prediction models, the next phase is to find its root cause. The first step in this process is to reproduce the issue in isolation. The next step is to figure out whether the issue is related to the inputs of the ML model or to the model itself. Once the root cause is identified, the next step is to fix the issue. This part is similar to typical software engineering: we can ship a short-term hotfix or a long-term solution. Beyond fixing the issue, another phase of issue resolution is improving RecSysOps itself. Finally, it is important to make RecSysOps as frictionless as possible; this makes operations smooth and the system more reliable.
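The diagnosis flow described above (reproduce the issue, then decide whether it lies in the model's inputs or in the model itself) can be sketched roughly as follows. All names, schemas, and ranges here are hypothetical, chosen only to illustrate the triage order.

```python
# A toy sketch of issue triage: check the inputs first, then the model output.

def diagnose(features, expected_schema, score_fn, healthy_range):
    """Return a coarse root-cause label for a misbehaving recommendation."""
    # Step 1: check the model inputs against what the model expects.
    missing = [k for k in expected_schema if k not in features]
    if missing:
        return f"input issue: missing features {missing}"
    # Step 2: inputs look fine, so score them and inspect the model output.
    score = score_fn(features)
    lo, hi = healthy_range
    if not (lo <= score <= hi):
        return f"model issue: score {score} outside {healthy_range}"
    return "no issue reproduced"

schema = ["popularity", "recency"]
print(diagnose({"popularity": 0.9}, schema, lambda f: 0.5, (0.0, 1.0)))
# -> input issue: missing features ['recency']
```

The point of the ordering is that input problems are both more common and cheaper to rule out than model problems.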

NETFLIX: A BLESSING IN DISGUISE

To conclude, in this blog post I introduced RecSysOps, a set of best practices and lessons that we’ve learned at Netflix. I think these patterns are useful for anyone operating a real-world recommendation system to consider, to keep it performing well and improve it over time. Overall, putting these aspects together has helped us significantly reduce issues, increase trust with our stakeholders, and focus on innovation.

BY: SHANNUL H. MAWLONG

Sources: https://netflixtechblog.medium.com/recsysops-best-practices-for-operating-a-large-scale-recommender-system-95bbe195a841

https://research.netflix.com/research-area/machine-learning


Stanford Students Create AI Glasses That Transcribe Speech in Real-Time for Deaf People

Reading Time: 2 minutes
TranscribeGlass co-founder Tom Pritsky demonstrates the use of a prototype of the device. The Master’s student recently joined Yale student Madhav Lavakare at the startup aiming to provide live captioning at an accessible price. (Photo: CAMERON DURAN/The Stanford Daily)

In a world increasingly driven by technology, the need for inclusive innovations that solve communication issues cannot be overstated. TranscribeGlass is a groundbreaking invention by Stanford students Madhav Lavakare and Tom Pritsky. This AI-powered device has the potential to transform the lives of millions of people with hearing impairments, offering them a real-time transcription of spoken language right in front of their eyes.

The concept behind TranscribeGlass is simple yet ingenious. Leveraging existing speech-to-text technology, the glasses seamlessly convert spoken words into text, which is then projected onto the user’s lenses. This solution provides a transformative way for people with hearing loss to actively participate in conversations, navigate social interactions, and engage more fully with the world around them.
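The device's actual software is not public, but one small piece of such a pipeline, wrapping a live transcript into short lines that fit a heads-up display, can be sketched like this. The line width and function name are arbitrary assumptions for the example.

```python
# Toy sketch: break a running transcript into short caption lines for a
# small in-lens display. The 18-character width is an invented assumption.

def to_caption_lines(transcript, max_chars=18):
    """Greedily wrap words into display lines of at most `max_chars`."""
    lines, current = [], ""
    for word in transcript.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(to_caption_lines("the glasses project subtitles in real time"))
# -> ['the glasses', 'project subtitles', 'in real time']
```

In the real product this step would sit downstream of a speech-to-text engine, receiving text continuously rather than as one string.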

The implications of this technology are far-reaching. TranscribeGlass has the potential to break down barriers for people with hearing impairments, enabling them to engage more confidently in education, employment, and social settings. It can empower individuals to pursue their aspirations without the barriers of hearing loss, fostering a more equitable society.

Nevertheless, despite its immense potential, it is crucial to acknowledge the challenges that lie ahead for TranscribeGlass. One concern is that some individuals with hearing loss may fear that it draws attention to their disability. More concerning still, the accuracy and reliability of speech-to-text technology can vary depending on the speaker’s accent, background noise, and the speed of speech.

To address these concerns, it is essential to spread awareness and acceptance of TranscribeGlass and other assistive technologies. Open discussions and demonstrations can, for example, help encourage individuals with hearing loss to embrace these tools. Furthermore, continuous advancements in speech-to-text technology will enhance the accuracy and reliability of real-time transcription, making TranscribeGlass more and more effective in facilitating communication.

The development of TranscribeGlass represents a significant milestone towards inclusive communication. By harnessing the power of technology, Stanford students have created a tool that has the potential to change the lives of millions of people with hearing loss. As we move forward, it is our collective responsibility to ensure that these innovations are embraced, developed, and made accessible to all, fostering a world where everyone can participate and thrive.

https://www.thehindu.com/sci-tech/technology/how-a-stanford-startup-built-ar-glasses-for-the-hearing-impaired/article67135174.ece

https://www.scientificamerican.com/article/new-glasses-can-transcribe-speech-in-real-time/

Underwater Robotics

Reading Time: 3 minutes

Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) are two types of underwater robotic systems that play an increasingly significant role in ocean exploration, scientific research, and various industrial operations. Although both systems are designed to operate underwater, they differ in terms of how they are controlled and the tasks they are capable of performing. Collectively, both AUVs and ROVs are categorized as Unmanned Underwater Vehicles (UUVs).

Two autonomous underwater vehicles resting on land
Project Wilton Iver AUVs, courtesy of our partner, SeeByte



An AUV is an autonomous underwater vehicle that often (but not always) operates independently of direct human control. It is equipped with various sensors, instruments, and navigation systems that allow it to perform a range of tasks, including mapping the ocean floor, collecting environmental data, and conducting scientific surveys at sea. Typically, AUVs are programmed to perform specific missions and have the ability to make decisions based on real-time data, making them great candidates for long-term, repetitive missions. However, due to the lack of remote off-grid power solutions, big-data transmission, and edge-compute capabilities, the current generation of AUVs still has a limited operational reach and requires the intervention of human operators.
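To make the idea of mission autonomy concrete, here is a toy sketch (not any vendor's real control software) of waypoint-following logic that reacts to real-time data, in this case a battery level with an invented energy budget per leg:

```python
# Toy AUV mission loop: follow programmed waypoints, but abort and surface
# when onboard data (battery) crosses a safety reserve. All numbers invented.

def run_mission(waypoints, battery, cost_per_leg=10, reserve=15):
    """Visit waypoints in order; return (visited, status)."""
    visited = []
    for wp in waypoints:
        if battery - cost_per_leg < reserve:
            return visited, "aborted: low battery, surfacing"
        battery -= cost_per_leg
        visited.append(wp)
    return visited, "mission complete"

print(run_mission([(0, 0), (10, 0), (10, 10), (0, 10)], battery=50))
# -> ([(0, 0), (10, 0), (10, 10)], 'aborted: low battery, surfacing')
```

Real AUVs weigh many more signals (currents, obstacles, sensor health), but the shape of the decision loop is the same: pre-programmed plan plus onboard overrides.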

A remotely operated vehicle inspecting underwater structures
Subsea 7’s AIV performing a mid-water riser inspection using sonar, courtesy of our partner, SeeByte



Remotely Operated Vehicles (ROVs), on the other hand, are underwater robots that are often controlled by a human operator. Like their AUV counterparts, ROVs are also equipped with cameras, lights, and various sensors that allow them to perform tasks (such as inspections, maintenance, and repair on underwater structures and vessels). ROVs can also be equipped with sampling tools and other scientific instruments, making them useful for conducting research missions. ROVs play a very prominent role in deep-sea scientific missions for studying benthic ecosystems, such as during the EV Nautilus cruises. (More on this later in another post.)
The main advantage of ROVs is that they allow for direct human control, which can be especially useful in situations where real-time decision-making is required. This makes ROVs ideal for missions that require a high degree of precision and control, such as the inspection of underwater pipelines, the repair of underwater communication cables, or the removal of debris from shipwrecks. Additionally, ROVs can be equipped with manipulator arms and other tools, making them capable of performing tasks that are (currently) not possible with AUVs.

Despite the differences between AUVs and ROVs, both systems play an important role in a variety of industries. In the oil and gas industry, for example, both types of underwater robots are used for exploration and production, as well as for monitoring and maintenance of underwater pipelines and platforms. In scientific research, both AUVs and ROVs are used for oceanographic surveys, as well as for monitoring ocean ecosystems and the effects of climate change.

As the blue tech industry continues to advance, it is likely that UUVs will play an even greater role in ocean exploration, scientific research, and industrial operations in the years to come, making them a pivotal component of the rapidly growing blue economy.

In my view, the article is a clear and concise explanation of the differences between AUVs and ROVs, two types of underwater robotic systems that are widely used in the blue economy. It provides a brief overview of the main features, advantages, and disadvantages of each system, as well as some examples of how they are used in various industries and applications. The article also uses relevant images and links to illustrate the concepts and provide more information for interested readers.

However, the article could also be improved in some ways. For instance, it could provide more details on the current challenges and limitations of AUVs and ROVs, such as the technical, operational, and regulatory issues that affect their performance and deployment. It could also discuss some of the emerging trends and innovations in the field of underwater robotics, such as the development of hybrid systems that combine the features of both AUVs and ROVs, or the use of artificial intelligence and machine learning to enhance the autonomy and capabilities of UUVs. It could also address some of the ethical and social implications of using UUVs in the ocean, such as the potential impacts on the marine environment and biodiversity, or the legal and moral responsibilities of the operators and users of UUVs.

Overall, the article is a good introduction to the topic of underwater robotics, but it could go deeper and be more critical in its analysis and discussion.

Resources: Underwater Robotics. Autonomous Underwater Vehicles, AUVs, ROVs | Ocean Motion Tech Blog (medium.com)

Images:
https://miro.medium.com/v2/resize:fit:786/format:webp/1*97hjk-NauNJkmtIHqlBAUQ.jpeg
https://miro.medium.com/v2/resize:fit:786/format:webp/1*FnMPVGUqgsx4xqOndGssIA.jpeg


Quantum Computing: Unveiling the Future of Computing

Reading Time: 2 minutes

Quantum computing stands at the forefront of technology, leveraging the principles of quantum mechanics to tackle challenges too intricate for traditional computers. IBM Quantum pioneers this field, providing real quantum hardware to developers worldwide, a concept unimaginable just three decades ago. Here’s a breakdown of this transformative technology and why it’s crucial for the future.

Why Quantum Computing?

In the realm of supercomputers, classical machines excel at complex tasks but struggle with intricate problems, where numerous variables interact in convoluted ways. Tasks like simulating molecular behavior or detecting subtle fraud patterns in financial transactions pose challenges beyond classical capabilities. Quantum computers, however, manipulate quantum bits (qubits), enabling the creation of multidimensional computational spaces. Unlike classical counterparts, quantum algorithms efficiently solve intricate problems like chemical simulations, holding immense potential for diverse fields, from medicine to semiconductor design.

How Quantum Computers Work

At the heart of quantum computing lies the qubit, the quantum counterpart of the classical bit. Unlike classical processors, quantum processors require extremely low temperatures, just above absolute zero, to prevent decoherence, a loss of quantum states. Super-cooled superfluids are used to create superconductors, which enable qubits to exist in states of superposition and entanglement.

  1. Superposition: Qubits, when in a state of superposition, represent all possible configurations, forming complex computational spaces crucial for intricate problem-solving.
  2. Entanglement: Quantum entanglement correlates the behavior of two qubits, where changes in one directly affect the other, facilitating synchronized operations.
  3. Interference: Quantum interference manipulates waves of probabilities in superpositioned qubits. Through selective interference, undesirable outcomes cancel out, while amplified outcomes provide solutions to computations.
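Superposition and interference can be illustrated with a tiny state-vector simulation. This is standard textbook linear algebra, not IBM's actual hardware stack: a Hadamard gate puts a qubit into superposition, and applying it again makes the amplitudes interfere back to the |0⟩ state.

```python
# Minimal single-qubit state-vector demo of superposition and interference.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])                   # the |0> basis state

superposed = H @ ket0
probs = np.abs(superposed) ** 2
print(probs)  # equal ~50/50 chance of measuring 0 or 1

interfered = H @ superposed
print(np.abs(interfered) ** 2)  # amplitudes interfere back to |0>
```

The second result is exactly the "selective interference" described above: the amplitude for measuring 1 cancels out, while the amplitude for 0 is amplified.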

Applications Across Industries

Industries worldwide are recognizing the potential of quantum computing:

  • Medicine: Advancing drug discovery and molecular simulations.
  • Finance: Detecting intricate fraud patterns and optimizing trading strategies.
  • Logistics: Solving complex route optimization problems.
  • Energy: Revolutionizing materials for renewable energy solutions.
  • Manufacturing: Enhancing complex supply chain management.

As quantum hardware and algorithms progress, a new era of problem-solving emerges. Quantum computing is poised to redefine the boundaries of what’s possible, revolutionizing how we approach complex challenges in science, technology, and beyond. Stay tuned for a future powered by quantum possibilities.

In my opinion, the article provides a basic understanding of quantum computing but lacks critical analysis, practical examples, and expert insights, leaving us with unanswered questions about its real-world significance and challenges. More information can be found on IBM’s official website.

Sources: https://www.ibm.com/topics/quantum-computing


The Rise and Reality of AR and VR in Education, Healthcare, and Training

Reading Time: 3 minutes

Augmented Reality (AR) and Virtual Reality (VR) are immersive technologies that create realistic simulations of real or imagined environments. They have been increasingly used in various fields, such as entertainment, gaming, tourism, and education. However, one of the most promising and impactful applications of AR and VR is in healthcare, where they can enhance medical training, diagnosis, treatment, and patient care. In this article, we will explore the benefits and challenges of using AR and VR in healthcare education and training, and present some examples of how they are being implemented in different settings.


Benefits of AR and VR in Healthcare:

AR and VR offer several advantages for healthcare education and training, such as:

  • They provide a risk-free, controlled, and personalised environment that is also engaging and interactive. This enables learners to practice various skills and scenarios without harming themselves or others, and to receive immediate feedback and guidance [1].
  • They enable repeatable and scalable immersive simulations that can accommodate different levels of difficulty and complexity. Learners can master the basics before advancing to more challenging tasks, and access the simulations anytime and anywhere.
  • They enhance the realism and fidelity of the simulations by incorporating sensory inputs, such as visual, auditory, haptic, and olfactory stimuli. Learners can experience the situations as close as possible to reality, and develop their situational awareness and decision-making abilities.
  • They facilitate collaboration and communication among learners and instructors, as well as between different disciplines and specialties. Learners can work in teams, share perspectives, learn from each other, and develop their interpersonal skills.



Examples of AR and VR in Healthcare:

There are many examples of how AR and VR are being used in healthcare education and training across different domains, such as:

  • Anatomy: AR and VR can help learners visualize the structure and function of the human body in 3D, without the need for cadavers or models. For instance, the HoloAnatomy app uses Microsoft HoloLens to display holographic images of the human anatomy that can be manipulated by gestures. Similarly, the 3D Organon VR Anatomy app uses Oculus Rift to display interactive models of the human anatomy that can be explored by controllers.
  • Surgery: AR and VR can help learners practice surgical procedures in a realistic and safe environment, without the need for live patients or animals. For example, the PrecisionOS system uses Oculus Quest to provide immersive surgical simulations for orthopedic education. Likewise, the Osso VR system uses HTC Vive to provide surgical simulations for various specialties.
  • Resuscitation: AR and VR can help learners perform cardiopulmonary resuscitation (CPR) in a lifelike scenario, without the need for manikins or actors. For instance, the ResusVR app uses Google Cardboard to provide a 360-degree video of a CPR scenario that can be controlled by voice commands. Similarly, the CPR Simulator app uses Samsung Gear VR to provide a 3D simulation of a CPR scenario that can be controlled by head movements.



Challenges of AR and VR in Healthcare:

Despite the benefits of AR and VR in healthcare education and training, there are also some challenges that need to be addressed, such as:

  • Cost: AR and VR hardware and software can be expensive to acquire, maintain, update, and integrate with existing systems. Moreover, they may require additional resources such as space, power, an internet connection, technical support, etc.
  • Accessibility: AR and VR devices may not be widely available or compatible with different platforms or standards. Moreover, they may pose some barriers for users with disabilities or special needs [1][2].
  • Quality: AR and VR content may vary in quality depending on the source, design, development, validation, evaluation, etc. Moreover, they may contain errors or inaccuracies that could affect the learning outcomes or patient safety.
  • Ethics: AR and VR may raise some ethical issues regarding the privacy, consent, confidentiality, ownership, etc. of the data or images used. 



Conclusion:

AR and VR are transforming healthcare education and training by providing immersive simulations that enhance learning outcomes and patient care. However, they also pose some challenges that need to be overcome by further research, development, and regulation. Therefore, it is important to adopt a balanced approach that considers both the pros and cons of using AR and VR in healthcare education and training.



Sources:

  1. https://healthtechmagazine.net/article/2022/12/ar-vr-medical-training-2023-perfcon
  2. https://www.sciencedaily.com/releases/2021/07/210706115417.htm
  3. https://www.frontiersin.org/articles/10.3389/frobt.2021.612949/full
  4. https://bmjopen.bmj.com/content/11/8/e047004
  5. https://soeonline.american.edu/blog/benefits-of-virtual-reality-in-education/
  6. Bing AI – used to re-edit the post


Copilot AI Assistant in Windows 11: Will It Make or Break Microsoft? Will Microsoft Finally Beat Apple?

Reading Time: 4 minutes

In today’s rapidly evolving world, where technological advancements seem to know no bounds, it is always remarkable to witness how innovations can still surprise us. Microsoft, a renowned tech giant, has once again piqued our interest with the unveiling of their latest AI tool.

As the next major Windows 11 update is set to launch on November 1st, the spotlight is on the new features and innovations that Microsoft promises to bring to the table. Among the most anticipated additions to the operating system is the Copilot AI assistant. Copilot has been in the spotlight ever since its early preview release on September 26. In this article, we will delve deeper into the potential impact of Copilot and find out whether it can help Microsoft surpass Apple.


Copilot AI Assistant: A Game-Changer or a Liability?

Copilot is Microsoft’s answer to the growing demand for AI-powered virtual assistants. With the success of Apple’s Siri, Amazon’s Alexa, and Google Assistant, Microsoft has been striving to create a better multi-functional AI assistant that can cater to the needs of Windows users.

Reviews and Early Impressions

To assess the potential of Copilot, it is crucial to examine early reviews from users who have been using the Copilot preview.

“Of course, many Windows users will want to stick with the old way of doing things. But they’ll be missing out on many opportunities. In less than three days with Copilot, I find myself frequently turning to it for answers. The feature will be a boon to students and anyone who needs to compose text or create images, both of which it excels at. Copilot has plenty of room for improvement, though, particularly when it comes to changing settings, opening apps, and opening web pages.” – said Michael Muchmore in PC Magazine.

We see that the author has mostly found Copilot to be highly useful in a short span of time, indicating its practical value, and has left a quite positive review, though mentioning some inconveniences. But we should also take into consideration that he was using only the preview, so the full extent of Copilot’s capabilities may not yet be fully realized.


Potential for Success

  • Enhanced Integration with Windows Ecosystem

Copilot’s ability to seamlessly integrate with Windows 11 is a key strength. Users can expect it to excel in performing OS-related tasks and navigating the Windows environment effectively. This deep integration sets it apart from standalone AI assistants.

  • Continuous Improvement

Microsoft’s track record of refining its products over time suggests that Copilot could evolve into a formidable competitor. Regular updates and refinements can help address initial limitations and enhance the user experience.

Potential for a Fall

  • Competition

The AI assistant market is crowded, with established players like Apple and Google. To gain an edge, Copilot needs to not only match but surpass the capabilities of its competitors. Its success depends on Microsoft’s ability to close the gap effectively.

  • Privacy Concerns

As with all AI assistants, privacy and data security are paramount. Any mishandling of user data or security breaches could severely damage Copilot’s reputation.


My opinion: can Copilot help Microsoft beat Apple?

The question of whether Copilot can help Microsoft surpass Apple is a complex one. Apple’s Siri has a strong foothold in the mobile and smart home ecosystem, which may be challenging for Microsoft to match. However, the integration of Copilot with Windows 11 and the potential for cross-device functionality could prove enticing to users deeply embedded in the Microsoft ecosystem.

Of course, people will not abandon Apple easily (I certainly would not), but if Copilot really makes a difference and makes AI use easier and more comfortable than ever before, then perhaps people will gradually start switching to Microsoft. In the end, I think Copilot’s success will largely depend on how effectively Microsoft addresses its limitations and leverages its strengths. At the beginning, it may not need to surpass Apple, but rather carve out its niche within the Windows ecosystem, continue to evolve, and meet user expectations. If that happens, Copilot has the potential to change the way we interact not only with Windows devices but with all devices, and in that way beat Apple. But it probably will not happen in the near future.

In the coming months, as the update is officially released, users will have a clearer picture of Copilot’s capabilities and its impact on Microsoft’s future. Whether it’s a game-changer or a liability remains to be seen, but one thing is certain: Copilot will play a pivotal role in shaping Microsoft’s trajectory in the AI assistant market.

But what do you think about the upcoming launch of Copilot? Will it be successful?

Sources(reference):

  1. https://www.economist.com/briefing/2023/09/27/how-microsoft-could-supplant-apple-as-the-worlds-most-valuable-firm (the news – the inspiration – itself)
  2. https://blogs.microsoft.com/blog/2023/09/21/announcing-microsoft-copilot-your-everyday-ai-companion/ (the announcement from Microsoft about the launch)
  3. https://www.pcmag.com/news/hands-on-with-microsoft-copilot-in-windows-11-your-latest-ai-assistant. (Michael Muchmore’s whole review)
  4. https://youtu.be/Bh22j250NCg?si=UgcyLdQvCNNSTWaQ (in-depth YouTube video review)
  5. https://youtu.be/B2-8wrF9Okc?si=PgiOqOw7e-Im9txl (Microsoft’s own video presentation of the product)

AI generators used:

  1. ChatGPT 3.5 (key words: Microsoft 11 launch, Capilot, fail or success, positive and negative aspects)


Robots from 2023

Reading Time: 2 minutes

Robots in our time are no longer considered something supernatural and amazing. All the same, progress in their development is taking place, and on quite a large scale. Previously, these were simple machines that were controlled by people and had a very narrow range of capabilities.

Fortunately or not, over the past 5-10 years the field of robotics has advanced a lot, and the company Nvidia has made a significant contribution to that development. At this year’s International Consumer Electronics Show, all the top robots had Nvidia brains. And such robots are created for completely different areas of our lives.

For example, the GluxKind “Ella”, a baby stroller with artificial intelligence. It can rock the baby on its own, help with descents and ascents on hills, and has many sensors to monitor the environment. These sensors allow the stroller to move independently: if there is no child in the stroller, you do not have to push it, since it can follow you by itself while avoiding obstacles. And of course, the stroller can be connected to a phone and tracked using GPS.

A delivery robot from Neubility was also introduced. This unmanned robot has joined the ranks of delivery robots, and it was built larger and more reliable than many already on the market, since many delivery robots are hacked or used for other purposes. In theory, on-board cameras should record those who harm the robot, but it is unclear how long it will take until people leave it alone. So the company is considering more secure environments for the robot to work in, for instance golf courses, hospitals, and resorts. In my opinion, it is really difficult to bring such delivery robots into everyday life because there are a lot of unpredictable external factors. However, in hospitals and other fairly protected places, such innovations can be very useful.

Another interesting development was presented: a control tower for autonomous parking.

Seoul Robotics has taken a different path towards the commercialization of autonomous vehicles. Instead of developing and embedding the entire autonomous driving system, including sensors, into the vehicle, Seoul is turning to the surrounding infrastructure.

Seoul Robotics claims that its LV5 CTRL TWR provides information about the environment and selects the safest path for the vehicle. This approach can be much cheaper than building the technology into every car. It could also lead to fewer catastrophic failures, help older cars interact better with newer autonomous vehicles, and offer a viable low-cost upgrade that lets current non-autonomous cars behave as if they were autonomous.

Sources:

https://www.zdnet.com/article/best-robots-ai-ces-2023/



What is the next step for improvement of our online connection?

Reading Time: 2 minutes
A man on a small red sofa talking with a TV screen with a floating face inside

In 2019, the whole world changed and was forced to move everything online. As we headed into 2021, recovering from difficult months of lockdowns, some people felt the urge to go out and meet others in person, while others preferred to stay at home. It was difficult to convince some people to come back, for instance to the office, so a solution was found: hybrid mode. It was considered a helping hand, a way for people to stay connected while remaining in their own comfortable space.

Now, at the beginning of 2023, a new issue has been identified: the lack of a feeling of being connected. Some people turn off their cameras; others simply do not answer. Logitech, a Swiss company focused on developing innovative tech products, recently announced a telepresence video booth called 'Project Ghost'.

Life-size, eye-to-eye conversations are made possible by a new mechanism that uses a mirror to project the video chat. 'Project Ghost' is a self-contained video booth that lets people connect across the world while keeping the call secure and private.

This novelty is a great step towards bringing people closer together, even those who are not able to meet in person: it allows direct eye contact, which is impossible on Zoom, Skype or other similar platforms.

Sure, such inventions have drawbacks, in this case size and price. It is not possible to install many such video booths even in big offices, as they require a lot of space, and by putting them too close together we lose some of the privacy of the conversations, which was actually one of the main advantages of 'Project Ghost'. What is more, while the mirror approach sacrifices some video quality, it keeps the price from climbing into the several hundred thousand range that a glasses-free 3D display would require.

So, there is a trade-off between practicality and need, between connection and convenience. All that is left is to watch how things evolve, as we will eventually arrive at better ways to connect than Zoom calls.

To read more about the ‘Project Ghost’: https://www.theverge.com/2023/1/31/23577918/logitech-steelcase-project-ghost-video-chat-booth-starline

Satellite internet is the future!

Reading Time: 2 minutes

These days, access to high-speed internet is a must-have. Connection to virtual platforms gives unlimited possibilities, and it is hard to imagine not being able to use the Internet whenever we want to. In this article I will present the idea of low-Earth-orbit satellites that can provide millions of people with a well-working broadband connection.

Of course, we are already familiar with the Starlink project by SpaceX, founded by Elon Musk, but this is only the beginning when it comes to revolutionising satellite internet. Companies such as the already mentioned SpaceX and others like Amazon or OneWeb plan to launch dozens of devices that would reinforce 4G and enable great access to the Internet from every place on Earth (even on planes and boats!) for an affordable price.

Of course, it is a huge investment and the payback would be spread over many years, although I believe that for such huge companies this will not be a problem and they will be able to take the risk. Amazon's 'Project Kuiper' involves launching over 3,000 satellites, creating serious competition for the already existing Starlink.

But what does it actually change? Satellites themselves are getting cheaper, and what we are witnessing right now could be a revolution in the mobile phone and internet provider industries. These brands will create a whole new market and become a huge competitor to mobile networks. Low-orbit satellites would also be a great invention in the field of gaming, because the delay in games would decrease, giving customers a better experience. Nowadays everyone uses classic solutions, but who knows, maybe someday we won't be using terrestrial Internet at all.
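The latency advantage is simple physics: the signal must travel up to the satellite and back, so the orbital altitude sets a hard floor on the round-trip time. A minimal back-of-the-envelope sketch (assuming a user-to-satellite-to-ground-station route, i.e. four traversals of the altitude per ping, and ignoring routing and processing delays):

```python
C_KM_PER_S = 299_792  # speed of light in vacuum

def min_rtt_ms(altitude_km: float) -> float:
    """Lower bound on ping round-trip time from orbital altitude alone:
    up, down, and back again = 4 hops of the altitude."""
    return 4 * altitude_km / C_KM_PER_S * 1000

leo = min_rtt_ms(550)      # Starlink-class low-Earth orbit
geo = min_rtt_ms(35_786)   # classic geostationary satellite internet

print(f"LEO floor: {leo:.1f} ms, GEO floor: {geo:.1f} ms")
```

Roughly 7 ms versus almost 480 ms: that gap is why low-orbit constellations are the first satellite internet that is even plausible for gaming.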

Although it is a brilliant idea, there are a few obstacles on the road to success. I have already mentioned the money aspect: buying new machines that need to be launched into space is a huge expense. But there are other important aspects besides money. The space-trash problem comes to the fore: sending thousands of satellites into orbit can cause a concentration of waste in that area, which could lead to objects falling toward the Earth's surface. Another issue is that it could hinder the work of astronomers, as light pollution would increase significantly.

In such cases, deployment of new equipment into space should be regulated by law. This is an aspect that is difficult for me to assess, because on the one hand orbit is a no-man's land, yet intense activity in this sphere can affect the welfare of all countries and our ecosystem, even if its purpose is to help mankind and make our lives easier.

In my opinion, this idea would be great for isolated countries and islands, or for households far away from bigger cities. For me it is an interesting topic, and I would love to see your opinion on it in the comments section.

Sources:

https://www.vox.com/recode/2023/1/10/23548291/elon-musk-starlink-space-internet-satellites-amazon-oneweb

https://www.vice.com/en/article/wjvjkw/spacex-and-amazon-hope-to-deliver-cheap-broadband-with-low-orbit-satellites

Neurochips: Synchron vs. Elon Musk's ambitions

Reading Time: 4 minutes

Synchron became the first brain-computer interface developer to place its Stentrode implant in the head of an American patient. Now a person who long ago lost the ability to move and speak will be able to control a web browser and communicate via email using the power of thought. He became the fifth person in the world with an implanted neurochip.

The new Synchron project was carried out under the guidance of specialists from Mount Sinai West Medical Center. Neurointerventional surgeon Shahram Majidi made an incision in the patient's neck and inserted a miniature sensor into the brain. The Stentrode, a 1.5-inch-long device consisting of wires and electrodes, opened and began to fuse with the outer edges of the blood vessel. According to Majidi, the procedure was very similar to implanting a coronary stent, so it took only a few minutes.

Electrode matrix Stentrode implanted into a blood vessel inside the skull

When the operation was completed, the sensor was connected via a physical cable to a computing device previously implanted in the patient's chest. To do this, surgeons created a "tunnel" for the wires and a pocket for the device under the skin, on the same principle as modern pacemakers are installed. After that, the neural interface was activated and connected to the computer. Within 48 hours, the patient had already gone home.

ALS, also known as motor neuron disease, is a rare but terrible condition: the atrophy of neurons gradually disconnects the brain from its periphery, the muscles. What it does to a person can be seen in the example of its most famous patient of our time: Stephen Hawking.

The principle of operation of the Stentrode is relatively simple. The sensor reads the signals generated by the patient's brain and transmits them to the device under the skin. The latter interprets the activity and sends a ready-made command via Bluetooth to a computer or smartphone. But how does the cursor move? Here lies a weakness of the current Stentrode model: choosing where to point and moving the cursor is not done by the brain's nerve impulses directly, but by means of a workaround: an eye tracker, an eye-tracking system like the one built into Stephen Hawking's famous chair that allowed him to work on a computer and communicate online, only without a neural interface. In this way, the user can switch tabs in the browser, open the email application, type a text and send it to the selected recipient.
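The division of labour described above, gaze for position and neural signal for selection, can be sketched as a toy control loop. Every name and the threshold here are illustrative assumptions of mine, not Synchron's actual interface:

```python
# Toy model: the eye tracker supplies the cursor position, while a
# thresholded motor-intent level decoded from the implant supplies the
# "click". The threshold value is an assumption for illustration.
from dataclasses import dataclass

CLICK_THRESHOLD = 0.8  # assumed normalized motor-intent level

@dataclass
class Event:
    x: int
    y: int
    click: bool

def fuse(gaze_xy: tuple, motor_intent: float) -> Event:
    """Combine gaze position with decoded motor intent into a UI event."""
    x, y = gaze_xy
    return Event(x=x, y=y, click=motor_intent >= CLICK_THRESHOLD)

print(fuse((420, 130), motor_intent=0.92))
```

The sketch makes the limitation visible: the implant only contributes the one-bit "press" decision, while all pointing still comes from the external eye tracker.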

And still, Synchron managed to get ahead of its main competitor: Elon Musk's ambitious Neuralink, which promised to start experiments on human volunteers back in 2020 but to this day has not received permission from the FDA. The procedure in Musk's project also looks much bolder: thanks to a far more extensive system of sensors and electrodes, it promises the patient much more than the Stentrode, but it is also more brutal. Instead of a neat pass through an incision in the neck, a surgical robot must cut out part of the patient's skull and, after a complex automated operation connecting electrodes directly to the brain, replace it with an electronic unit.

However, a comparable approach has been used for two decades by the Utah Array (BrainGate) technology from Cyberkinetics, which Musk himself calls "something like an instrument of torture." The technology is now so well developed that it is used in more than five hundred laboratories around the world. It allows patients to make rather complex movements with electronic limbs via electrodes embedded in the motor areas of the brain, and it also makes it possible to move a cursor across the screen by an effort of thought. But it requires a very complex neurosurgical operation and cumbersome equipment, and it has quite negative consequences for patients' health: the body rejects electrodes and sensors that are too large or too roughly embedded, and the channels overgrow and block the contacts.

Synchron CEO Thomas Oxley holds the endovascular matrix of Stentrode electrodes during TED 2022

The company hopes that devices in the Stentrode series will be the first suitable for use at home, not only within the walls of a specially equipped laboratory or ward like BrainGate. And the implantation could be performed by many surgeons in almost ordinary hospitals, rather than by specially designed robots, as in Elon Musk's project, or by a whole team of specialists with the most complex equipment, as in the case of BrainGate.

Resources:

https://habr.com/ru/company/ruvds/blog/679652/

https://en.wikipedia.org/wiki/Stephen_Hawking

https://www.cnbc.com/2022/12/01/elon-musks-neuralink-makes-big-claims-but-experts-are-skeptical-.html
