Author Archives: Matvei Mankevich

Jail time for cheating in games?

Reading Time: 2 minutes

Information technology and global automation pose new challenges for regulating social relations, especially through criminal law. One such challenge arises in computer games, where a person uses unauthorized software to gain an unfair advantage, such as winning a game round or accumulating an excessive amount of resources, which falls under the definition of the word “cheat.” However, applying this concept within many countries’ criminal law may be problematic, because the narrow legal definition of “fraud” simply does not cover the use of cheats in games.

Showing off my Photoshop skills

Luckily, there is an existing example of a country whose criminal code has been adjusted to cover this modern variation of fraud. The Republic of Korea, arguably the heart of the online gaming and esports industries, has declared that using “malware” to cheat at games is illegal and punishable by law. To that end, South Korea passed a law in 2020 to promote integrity in video games. For speculative activities involving game items, this law imposes a penalty of up to 5 years in jail or a maximum fine of 50 million Korean won.

It should also be noted that this legal practice is still sporadic and not widely applied today, and there is no guarantee that other jurisdictions will adopt it, especially in the near future, considering how bureaucratic the legislative process can be in some countries. But it is also possible that new rules will eventually make using malware in games explicitly illegal, giving the practice its own stipulation in the law.

References:
https://adda.royalcapitalbd.com/south-korea-video-game-cheating-law/

Spotify playlists exclusive to NFT holders

Reading Time: 2 minutes

Several NFT platforms are able to provide exclusive playlists that only token holders can access, thanks to a new feature that the streaming service Spotify is currently testing. Members of the Fluf, Moonbirds, Kingship, and Overlord NFT communities can use the tool, which supports the MetaMask, Trust Wallet, Rainbow, Ledger Live, and Zerion wallets. The feature is currently available only to Android users in select regions, but it is anticipated to become accessible everywhere shortly.

Numerous decentralized music streaming services already operate, such as Audius, which pays members with its own tokens for using the platform. According to its website, Overlord, the project behind the Creepz NFT collection, put together a playlist titled “Invasion” that includes viral artists such as Snoop Dogg and Queen.

Another NFT platform, owned by Universal Music Group, made its own playlist for the Spotify feature. The playlists are intended to be updated frequently, with songs more closely related to the NFT collections of the participating platforms.

However, the general point of this feature seems to be missing. Other than justifying the purchase for existing NFT holders, there is no distinct reason for it to exist. Exclusive playlists may be suitable as a perk of some community’s membership, but they definitely go against the initial purpose of NFTs, which, as I recall, were more art-related. And this feature is surely not going to lure people into buying more NFTs. I won’t be surprised if it turns out that Spotify wasn’t the one initiating the idea and is simply accepting a paycheck from those web3 NFT companies, as there is no visible reason for Spotify to do this unless it is launching an NFT platform of its own.

Finally, it’s worth mentioning that Spotify has tested NFT integration before. Starting in May 2022, some artists, including Steve Aoki and The Wombats, were allowed to advertise their NFT ventures on their own artist cards.

Source:
https://habr.com/ru/news/t/719032/

Grow yourself some electrodes

Reading Time: 2 minutes

Now you do not need to be Arnold Schwarzenegger to become the mighty Terminator! Researchers from the universities of Linköping, Lund, and Gothenburg have created an innovative gel that enables the creation of electrodes in living organisms without the need for implants.

20 little-known facts about the “Terminator” films that you may find interesting

Zebrafish and medicinal leeches served as the research animals for the experiment. After the gel is injected, electrodes form in the fish’s brain, heart, and tail fins, as well as around a significant amount of neural tissue.

One of the issues facing developers of various kinds of brain implants is the challenge of inserting pre-made chips into biological tissue. An alternative method is to use biological elements and grow an electrode grid directly within the cells, which will likely lead to the ability to create human-machine hybrids. By making specific modifications to the chemical composition of the material, the researchers were able to create electrodes that are accepted by brain tissue and the immune system.

Magnus Berggren, one of the authors, explains that the new invention lets you literally grow electronics, and, most importantly, the organism itself is unaffected by this. The authors emphasize that humans have been working for many years on creating devices that resemble biology, and we are now letting biology mold technology for us.

And it definitely seems like it, as we might be entering the next level of mankind’s evolution, now beginning to integrate complicated circuitry directly into bodies. One possible use case is extending the brain’s capabilities. Even though neurotech has been appearing in the media more often, especially thanks to Elon Musk’s Neuralink, the topic is not getting as viral as its possible impact on mankind would warrant.

Personally, it would be interesting to see whether neurotech could be combined with, for example, AR, connecting it directly to the retina without the need to wear a headset. Most likely, however, building integrated circuits in the human body will have its biggest effect on how neurological disorders are treated in the future.

Source:

https://www.science.org/doi/10.1126/science.adc9998

Multilingual audio is now supported by YouTube

Reading Time: 2 minutes

YouTube is launching a video dubbing feature for different languages. The multilingual voiceover technology will be built into YouTube, but creators will have to work with third-party voiceover providers; there is no information yet about which particular providers will be available.

Once the video is uploaded, viewers can select their preferred audio track from the same menu where they currently adjust other options such as subtitles or audio quality. The content creator must indicate which additional languages they want to support.

YouTube has tested the feature with a small group of creators, and it has been used in more than 3,500 videos in 40 languages. As of last month, more than 15% of viewers watched these videos in their native language. In January alone, the volume of such views exceeded 2 million hours.

The well-known blogger MrBeast has previously worked to promote his material translated into other languages by creating separate channels for dubbed videos. Some of these channels are incredibly popular, offering an opportunity to expand one’s audience and find new ways to make money. By surfacing videos to a multilingual audience, the new feature is expected to make it simpler for a variety of creators to increase monetization income, which will be profitable for both the platform and the creators.

Multilingual channels of MrBeast

The feature is currently only enabled for long videos, but it is being tested in Shorts.

Content creators will be informed when the new function becomes available in Creator Studio. It will initially be made available to only several thousand creators as part of testing; when more creators will gain regular access is currently unknown.

It will be fascinating to observe how this feature spreads throughout YouTube, and especially how it will be integrated into YouTube Shorts, although it has not yet been reported how dubbing will be used in short videos. Presumably, as YouTube’s embedded dubbing develops, we should expect AI-generated subtitles to be added later.

Sources:

https://techcrunch.com/2023/02/23/youtube-launches-a-multi-language-audio-feature-for-dubbing-videos-previously-tested-by-mr-beast/

https://www.ixbt.com/news/2023/02/24/v-youtube-pojavilos-dublirovanie-na-raznyh-jazykah.html

The all new 2024 Mercedes E-class and its innovative tech

Reading Time: 2 minutes

The Mercedes E-Class has long been the gold standard of mid-size business sedans, inheriting innovations from its more premium sibling, the Mercedes S-Class, and the newest model is no exception. After major infotainment changes were introduced in the W223 S-Class in 2021, the new tech is now announced for the 2024 E-Class model.

An updated E-Class model for 2024 has been revealed by Mercedes-Benz. Owners will receive a massive touchscreen with built-in TikTok and Zoom applications as well as optional selfie cameras. 

The superscreen combines a sizable center touchscreen with a second screen for the passenger. Previous versions of the infotainment system spread its various functions across separate modules, while the new MBUX system is built around a central computing unit. According to Mercedes, the new software-focused design enables flexible updates and much faster data transfers.

2024 Mercedes-Benz E-Class Superscreen

Control-panel icons have been streamlined and styled to look like smartphone menus. The built-in speakers support both Spatial Audio (multidimensional sound) and Dolby Atmos. When music is playing, the system can create an adaptive lighting ambiance. If Mercedes’ Level 3 Drive Pilot (the company’s own automated lane-keeping technology, similar to Tesla’s Autopilot) is engaged, video streaming is available on the main screen. TikTok, Zoom, Cisco Webex, and other apps are supported. The infotainment system will also be able to run the Zync portal, which offers more than 30 different streaming services.

The new model employs a number of safety measures to avoid driver distraction caused by the passenger display: the touch feature is turned on only when the sensors detect a human in the passenger seat. Selfie and video cameras are mounted on the main screen and at the top of the driver’s dashboard to track interior activity. Moreover, the cameras monitor the driver’s eye movements, recording how frequently and for how long the driver gazes at the passenger’s screen. Dual-control technology will also be available to dim or completely obscure the driver’s view of the passenger display.

The AI assistant no longer needs to be activated by saying “Hey Mercedes”: the new Just Speak feature allows drivers to issue commands directly. Car owners can, for example, have the heated seats turn on and the interior lighting adjust when the temperature falls below a predetermined level. By remembering the driver’s routines, the AI will eventually be able to generate commands for routine activities automatically. 5G connectivity will also be available, further enriching the experience.

Sources:

https://www.cnet.com/roadshow/news/2024-mercedes-benz-e-class-superscreen-tiktok-zoom-apps/
https://edition.cnn.com/2023/02/22/business/mercedes-e-class-zoom-tiktok/index.html


What might Apple’s new AR/VR headset be like?

Reading Time: 3 minutes

Apple this year is allegedly planning to enter a new product category, launching its first mixed reality headset. Rumors indicate that the upcoming headset will support both AR and VR technology, and that it will have features that will outshine competing products.

With the iPhone, iPad, and Apple Watch, Apple’s hardware and software allowed it to dominate each category within a few short years of entering the market, and it’s likely the same thing will happen with augmented and virtual reality.

4K Micro-OLED Displays

Apple is using two high-resolution 4K micro-OLED displays from Sony that are said to have up to 3,000 pixels per inch. Comparatively, Meta’s new top of the line Quest Pro has LCD displays, so Apple is going to be offering much more advanced display technology.

Micro-OLED displays are built directly onto chip wafers rather than a glass substrate, allowing for a thinner, smaller, and lighter display that’s also more power efficient compared to LCDs and other alternatives.

Apple’s design will block out peripheral light, and display quality will be adjusted for peripheral vision to cut down on the processing power needed to run the device. Through its eye tracking functionality, Apple will be able to reduce graphical fidelity at the periphery of the wearer’s field of view.
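The idea behind this eye-tracked “foveated rendering” can be sketched as a resolution falloff with angular distance from the gaze point. This is only an illustrative model: the foveal radius, falloff range, and scale factors below are my own assumptions, not Apple’s actual values.

```python
# Toy sketch of foveated rendering: shade pixels near the gaze point at
# full resolution and progressively cheaper farther out. All numbers
# here are illustrative assumptions, not a real headset's parameters.
def render_scale(pixel_angle_deg: float,
                 foveal_radius_deg: float = 10.0,
                 min_scale: float = 0.25) -> float:
    """Return the fraction of full resolution used for a pixel located
    `pixel_angle_deg` degrees away from the current gaze direction."""
    if pixel_angle_deg <= foveal_radius_deg:
        return 1.0  # fovea: render at full resolution
    # Linear falloff out to 60 degrees of eccentricity, clamped below.
    t = (pixel_angle_deg - foveal_radius_deg) / (60.0 - foveal_radius_deg)
    return max(min_scale, 1.0 - t * (1.0 - min_scale))
```

With these assumed numbers, a pixel 40 degrees off-gaze would be shaded at roughly half resolution, which is where the claimed processing-power savings would come from.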

Apple Silicon Chip

Rumors suggest that Apple is going to use two Mac-level M2 processors for the AR/VR headset, which will give it more built-in compute power than competing products. Apple will use a high-end main processor and a lower-end processor that will manage the various sensors in the device.

More Than a Dozen Cameras

Apple is outfitting its AR/VR headset with more than a dozen cameras, which will capture motion to translate real world movement to virtual movement. It is said to have two downward-facing cameras to capture leg movement specifically, which will be a unique feature that will allow for more accurate motion tracking.

The cameras will be able to map the environment, detecting surfaces, edges, and dimensions in rooms with accuracy, as well as people and other objects. The cameras may also be able to do things like enhance small type, and they’ll be able to track body movements.

Iris Scanning

For privacy and security, the AR/VR headset will integrate an iris scanner that can read the pattern of the user’s eye, allowing for an iris scan to be used in lieu of a password and for payment authentication.

Iris scanning on the AR/VR headset will be akin to Face ID and Touch ID on the ‌iPhone‌, ‌iPad‌, and Mac. It could allow two people to use the same headset, and it is a feature that is not available on competing headsets like Meta’s new Quest Pro.

Thin and Light Design

Apple is aiming for comfort, and the AR/VR headset is rumored to be made from mesh fabric and aluminum, making it much lighter and thinner than other mixed reality headsets that are available on the market. Apple wants the weight to be around 200 grams, which is much lighter than the 722 gram Quest Pro from Meta.

Control Methods

3D sensing modules will detect hand gestures for control purposes, and there will be skin detection. Apple will allow for voice control and the AR/VR headset will support Siri like other Apple devices. Apple has tested a thimble-like device worn on the finger, but it is not yet clear what kind of input methods we’ll get with the new device.

Interchangeable Headbands

The mesh fabric behind the eyepieces will make the headset comfortable to wear, and it will have swappable Apple Watch-like headbands to choose from.

One headband is rumored to provide spatial-audio-style technology for a surround-sound-like experience, while another provides extra battery life. It’s not clear if these will make it to launch, but headbands with different capabilities are definitely a possibility.

Facial Expression Tracking

The cameras in the AR/VR headset will be able to interpret facial expressions, translating them to virtual avatars. So if you smile or scowl in real life, your virtual avatar will make the same expression in various apps, similar to how the TrueDepth camera system works with Memoji and Animoji on the ‌iPhone‌ and ‌iPad‌.

The aforementioned facial expression detection would allow the headset to read facial expressions and features, matching that in real time for a lifelike chatting experience. Apple is working with media partners for content that can be watched in VR, and existing services like Apple TV+ and Apple Arcade are expected to integrate with the headset.

Independent Operation

With two Apple silicon chips inside, the headset will not need to rely on a connection to an ‌iPhone‌ or a Mac for power, and it will be able to function on its own.

Teams Premium is now available

Reading Time: 2 minutes

Microsoft has made Microsoft Teams Premium available in a public preview for users to test out its AI-powered features, including interactive translation, custom meeting branding, and advanced meeting protections. As an add-on for the Microsoft Teams conferencing service, Microsoft Teams Premium is designed to make meetings more personalized, smarter, and more secure.

Users can access the preview through the Microsoft 365 admin center and try out the simultaneous translation of 40 languages with captioning, custom meeting templates, and advanced webinar features, including registration for up to a thousand participants, interactivity, and reporting. The preview also includes features to protect sensitive content, such as watermarks and labels, and end-to-end encryption, which can be used to prevent users from recording meetings and copy/pasting chats.

While not all of the Teams Premium features announced in October will be immediately available to try during the free trial, custom branding for meetings, which allows organizations to set branded backgrounds and place corporate logos, will be available for trial in January. Other features, such as Smart Resume, a virtual assistant that shares meeting highlights and automatically generates chapters and ideas, have been announced as not yet available for the Microsoft Teams Premium December preview.

Users can check the availability of each feature on the Microsoft Tech Community Blog and can sign up for a mailing list for updates on the rollout of features for Teams Premium throughout December and January. At the end of the 30-day free trial, access to Teams Premium features will no longer be available. The product is scheduled for public release in early February 2023 and is expected to cost $10 per month.

In addition to the new features available in Microsoft Teams Premium, in November, Microsoft introduced a platform with a collection of online games for Teams users to play during meetings. The company believes that these games will help users relax, have a good time, and improve team spirit.

Uber Eats is deploying sidewalk robots

Reading Time: 2 minutes

Uber is set to deploy a new fleet of delivery robots on the streets of Miami, Florida. The six-wheeled robots, manufactured by auto supplier Magna, have been developed by California-based AI firm Cartken. The company, founded by ex-Google engineers, is known for deploying its robots on college campuses. The robots will deliver items offered by businesses in the Dadeland area of Miami-Dade County, with plans to expand into other areas of the city and other markets in 2023. They have a range of several miles and can carry up to two dozen pounds of cargo. They are also equipped with cameras that can identify obstacles and help them navigate.

The deployment of autonomous delivery robots is becoming increasingly common, with many being seen on college campuses and in some towns and cities. However, there have been instances of robots getting stuck in snow, being run over by cars, or even catching fire.

Uber has been expanding its use of autonomous vehicles for ride-hailing trips and deliveries. The company has a 10-year agreement with Nuro to use its delivery vehicles in California and Texas, and is working with Serve Robotics and Motional on a robot delivery pilot in Los Angeles. It is also featuring Motional’s robotaxis for ride-hail customers on its app in Las Vegas.

In the past, Uber developed its own fleet of autonomous vehicles with the aim of eventually replacing all of its human drivers. However, the program was terminated after a woman was killed by one of the company’s vehicles in 2018.

MIT’s new Image2Lego neural network will surprise you

Reading Time: 2 minutes

We all love LEGO, a truly cross-generational entertainment that loads of people are obsessed with regardless of age. Although LEGO sets have entertained generations of children and adults, the challenge of designing customized builds matching the complexity of real-world or imagined scenes remains too great for the average enthusiast.

Researchers at the Massachusetts Institute of Technology (MIT) have developed an Image2Lego neural network algorithm that builds instructions for building a 3D LEGO model from a 2D image.

The process goes as follows:

1. The user uploads a regular image: for example, with an airplane.

2. The algorithm recognizes the plane in the photo and passes it to the Image2Lego neural network.

3. Trained Image2Lego transforms a 2D picture into a 3D model of an airplane using neural networks and shows how it should look if assembled from LEGO bricks.

4. The algorithm creates instructions for assembling the model and tells you what parts will be needed for this.

Here is how the researchers described the objective: “We design a novel solution to this problem that uses an octree-structured autoencoder trained on 3D voxelized models to obtain a feasible latent representation for model reconstruction, and a separate network trained to predict this latent representation from 2D images.”
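The last step of the pipeline, turning a voxelized model into a parts list, can be sketched with a toy greedy algorithm. To be clear, this is not the authors’ method (their system reconstructs the voxels with the octree autoencoder described above and uses a more sophisticated brick-conversion stage); it only illustrates the idea of covering runs of filled voxels with standard brick lengths.

```python
# Toy "legoization" sketch: cover each row of each layer of a 3D boolean
# occupancy grid with the longest standard brick that fits, and tally
# the bricks used. Purely illustrative, not the Image2Lego algorithm.
def voxels_to_parts(grid, brick_lengths=(4, 2, 1)):
    """grid[z][y][x] is True where a voxel is filled.
    Returns a dict mapping brick length to the number of bricks used."""
    parts = {}
    for layer in grid:
        for row in layer:
            x = 0
            while x < len(row):
                if not row[x]:
                    x += 1
                    continue
                # Pick the longest brick covering only filled voxels.
                for length in brick_lengths:
                    if x + length <= len(row) and all(row[x:x + length]):
                        parts[length] = parts.get(length, 0) + 1
                        x += length
                        break
    return parts

# A single 1x5 row of voxels is covered by one 1x4 brick and one 1x1.
print(voxels_to_parts([[[True] * 5]]))  # {4: 1, 1: 1}
```

A real converter would also have to stagger bricks between layers for structural stability, which is one reason the problem is harder than this sketch suggests.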

We have already seen AI applied to LEGO assembly. For example, BrickIt lets a user lay out all their LEGO pieces flat and scan them with a mobile camera. The image is then processed with computer vision to detect the available bricks and suggest possible models to assemble, both official and custom. However, MIT’s neural network takes the opposite approach to the BrickIt app and allows for more usage scenarios.

Assembling a simple model like a tower or a car is just too banal. Image2Lego is capable of processing a photo of a face and creating detailed instructions for assembling a 3D model of it. Trying it out on a model of your own face must be a unique experience.

Here are Image2Lego’s instructions for assembling actor Chris Pratt’s face:

HealthKit data to be used for generating Spotify playlists

Reading Time: 2 minutes

In the near future, the popular streaming service Spotify will integrate with HealthKit, a software platform developed by Apple for iOS devices.

This integration will allow Spotify to collect data about a user’s workouts, including information such as duration, heart rate, and the number of calories burned. According to a developer who discovered functions related to Apple’s API in the Spotify code, this data will be used to create an automated workout playlist system based on the user’s activity level and preferences.

For example, if a person is active and frequently goes running, Spotify may include more energetic music in their playlist, while someone who prefers meditation and stretching may receive a playlist with more soothing music. It’s worth noting that Spotify will not be able to collect user data without their permission, as any application that utilizes the Apple API must be approved by the user, and the user can also opt to stop the integration at any time.
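As a toy illustration of the logic described above — energetic music for vigorous workouts, soothing music for calm ones — one could imagine a mapping like the following. The thresholds, workout types, and mood names are purely hypothetical; neither Spotify nor Apple has published how the actual system will work.

```python
# Hypothetical sketch of workout-driven playlist selection. The heart
# rate thresholds and mood labels are invented for illustration only.
def pick_playlist_mood(avg_heart_rate_bpm: float, workout_type: str) -> str:
    """Map a recent workout to a playlist mood label."""
    if workout_type in ("yoga", "stretching", "meditation"):
        return "soothing"  # calm activities get calm music
    if avg_heart_rate_bpm >= 140:  # vigorous effort, e.g. running
        return "energetic"
    if avg_heart_rate_bpm >= 110:  # moderate effort
        return "upbeat"
    return "chill"

print(pick_playlist_mood(150, "running"))  # energetic
print(pick_playlist_mood(90, "yoga"))      # soothing
```

In practice, the workout data itself would come from HealthKit (which exposes workout type, duration, and heart rate samples to user-approved apps); this sketch only covers the playlist-side decision.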

The piece of code used for calling the HealthKit API

Over the past couple of years, we have already seen Spotify develop experimental playlist-creation features. The most recent one is designed to mirror a user’s current mood by collecting various data from them, such as a favorite color or a shape of choice. Another attempt is the AI-driven “Expand playlist” feature, which helps find songs similar to those already in a playlist. Both have proven very solid and practical for those looking to broaden their musical horizons.

It is currently unclear when this integration will be released, or what specific health data Spotify plans to collect. The Spotify press office has not yet commented on the matter. Most probably, this is Spotify’s response to new features, such as the karaoke mode, being added to Apple Music, its biggest competitor.