
Quantum computing – overview


Quantum computing is an exciting new field that is rapidly gaining attention from scientists and researchers around the world. Unlike classical computers, which use binary digits (bits) to represent information, quantum computers use quantum bits, or qubits, which can be made from quantum mechanical systems with two states (I’ll explain that in a moment). For example, the spin of an electron can be measured as up or down, and an individual photon can be polarized vertically or horizontally. This new type of computing promises to revolutionize the way we process information, making it possible to tackle problems that are currently out of reach for classical computers.
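To make the idea of a two-state quantum system more concrete, here is a minimal sketch in plain NumPy (no quantum SDK) of how a qubit’s state can be represented by two complex amplitudes and how measurement probabilities follow from them; the equal-superposition values are just an illustrative choice.

```python
import numpy as np

# A qubit as a vector of two complex amplitudes:
# |psi> = alpha*|0> + beta*|1>, with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # equal superposition of 0 and 1
state = np.array([alpha, beta], dtype=complex)

# The Born rule: measurement probabilities are the squared amplitudes.
p0, p1 = np.abs(state) ** 2
print(f"P(measure 0) = {p0:.2f}, P(measure 1) = {p1:.2f}")

# Each measurement collapses the qubit to a definite 0 or 1.
outcomes = np.random.choice([0, 1], size=10, p=[p0, p1])
print("Ten simulated measurements:", outcomes)
```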

The fundamental difference between classical computers and quantum computers is the way they store and manipulate information. In classical computers, information is stored in binary form, as either a 0 or a 1. In quantum computers, information is stored as quantum states, which are superpositions of 0s and 1s. This means that a quantum bit can be both a 0 and a 1 at the same time, which makes quantum computers far more powerful for certain tasks. Let’s look at an example from IBM that shows how quantum computers can succeed where classical computers fail:

“A supercomputer might be great at difficult tasks like sorting through a big database of protein sequences, but it will struggle to see the subtle patterns in that data that determine how those proteins behave.

Proteins are long strings of amino acids that become useful biological machines when they fold into complex shapes. Figuring out how proteins will fold is a problem with important implications for biology and medicine.

A classical supercomputer might try to fold a protein with brute force, leveraging its many processors to check every possible way of bending the chemical chain before arriving at an answer. As the protein sequences get longer and more complex, the supercomputer stalls. A chain of 100 amino acids could theoretically fold in any one of many trillions of ways. No computer has the working memory to handle all the possible combinations of individual folds.

Quantum algorithms take a new approach to these sorts of complex problems — creating multidimensional spaces where the patterns linking individual data points emerge. In the case of a protein folding problem, that pattern might be the combination of folds requiring the least energy to produce. That combination of folds is the solution to the problem.” (IBM, “What is quantum computing?”)
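To get a feel for the scale described in the quote above, here is a toy back-of-the-envelope calculation; the assumption that each peptide bond can take three distinct orientations is a deliberately simplified lattice-model figure, used only to show how quickly the search space explodes.

```python
# Toy illustration of the combinatorial explosion in protein folding.
# Assumption (illustrative only): each bond between neighbouring amino
# acids can adopt 3 distinct orientations in a simplified lattice model.
ORIENTATIONS_PER_BOND = 3

for length in (10, 50, 100):
    bonds = length - 1
    conformations = ORIENTATIONS_PER_BOND ** bonds
    print(f"{length:3d} amino acids -> ~{conformations:.2e} possible conformations")

# A 100-residue chain already gives roughly 1.7e47 conformations -- far
# beyond what any classical machine could enumerate by brute force.
```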

Quantum computers have the potential to solve certain complex problems much faster than classical ones. For example, they could quickly find the prime factors of large numbers, an operation that underpins modern cryptography. This makes quantum computing a potential game-changer for cryptography, as such machines could break encryption schemes that are currently considered practically unbreakable.
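For context, here is a minimal sketch of the naive classical approach, trial division, which illustrates why factoring large numbers is so expensive without a quantum computer; the RSA key size mentioned in the comments is a typical figure, not something taken from this article.

```python
import math

def trial_division(n: int) -> tuple[int, int]:
    """Naive classical factoring: up to sqrt(n) divisions in the worst case."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return n, 1  # no divisor found: n is prime

print(trial_division(15))      # (3, 5) -- instant
print(trial_division(10403))   # (101, 103) -- still instant

# For a 2048-bit RSA modulus, sqrt(n) has roughly 308 decimal digits,
# so this loop could never finish. Shor's algorithm on a sufficiently
# large quantum computer would factor such numbers in polynomial time.
```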

An interesting application of this type of computing is in the field of machine learning. Quantum computers may be able to run machine learning algorithms faster and more accurately than classical computers, which could lead to significant advancements in fields like speech recognition, natural language processing, and computer vision.

Quantum computers can also be used to study complex chemical reactions, which could lead to the development of new medicines and materials. Simulating these reactions can help researchers understand how such processes work and how they can be improved.

Despite the exciting potential, quantum computing is still a developing field, so there are challenges that need to be addressed. One of the biggest is maintaining the coherence of quantum states, which is necessary for quantum computers to function correctly. These states are highly sensitive to their environment: external noise and even small perturbations can cause them to collapse. Furthermore, the complexity of quantum computers makes them difficult to design, build and maintain. Currently, only a few of them are available, and the hardware is expensive and difficult to access. Another challenge is the lack of scalability, meaning that today’s machines can only solve very specific problems and are not yet capable of general-purpose computing. The number of available quantum algorithms is also limited, and developing new ones is a challenging and ongoing process. Additionally, because quantum computers are very expensive to build and maintain, their accessibility is limited to well-funded research institutions and large corporations.

What do we have to focus on in this area? The development of algorithms that can be run on quantum computers!

While there have been some advances in this area, there is still much work to be done to develop algorithms that can take full advantage of the power of quantum computing.

We can all agree that despite all the challenges this relatively new technology has to overcome, its future looks bright. With continued research and development, we are likely to see significant advances in the coming years. From cryptography to machine learning to chemistry, quantum computing has the potential to revolutionize the way we process information and solve problems, and to change the world as we know it.

Chat GPT – HOT OR NOT?


Chat GPT is one of the hottest subjects of recent days. It is already used by students, programmers and people across various fields. This artificial intelligence language model, developed by OpenAI, was designed to provide human-like text-based conversations. It has gained significant popularity and is used in various applications such as customer service, language translation, and content creation. The growing interest raises important questions about the advantages and disadvantages of this technology, as well as the latest developments in the field. So, what can we say about them?

One of the most notable advantages of Chat GPT is its ability to provide human-like conversations. The model is trained on a vast amount of text data, which gives it the ability to understand and respond to a wide range of topics and questions. This makes it suitable for use in conversational interfaces, where a human-like response is desired. For example, it can be used to provide customer service through chatbots, or as a language translator for people speaking different languages.
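As a rough illustration of how such a chatbot could be wired up, here is a minimal sketch using the openai Python package; the model name, message format and client interface are assumptions based on typical usage of that library rather than anything specified in this article, so check the current OpenAI documentation before relying on them.

```python
# Minimal customer-service chatbot sketch (assumed openai-package usage).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def answer_customer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # typical chat model name; adjust as needed
        messages=[
            {"role": "system", "content": "You are a polite support agent for an online store."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_customer("How do I reset my password?"))
```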

What can we say about its versatility? The model can be trained to perform a wide range of language-related tasks, including question answering, summarization, and text generation. This versatility makes the AI model useful for a variety of applications, including content creation, language learning, and research. For example, it can be used to generate articles, create personalized learning experiences, or even help researchers study language patterns and behaviors.

Furthermore, it has the ability to continuously improve over time. As more data is fed into the model, it continues to learn and become even more human-like in its responses. This means that the model will become more accurate and capable over time, providing a valuable resource for those using it for various applications.

Moreover, a very exciting development in the field of Chat GPT is the use of the technology in the creative industries. For example, the model has been used to generate music, poetry, and even visual art. This opens up new possibilities for artists and creatives, and it will be interesting to see how the technology evolves and is used in the future.

While it has many advantages, there are also some significant disadvantages to consider. One of the most pressing concerns is the potential for bias in the model. Like any machine learning model, Chat GPT may be susceptible to bias, particularly in its training data. For example, if the training data contains gender or racial stereotypes, the model may reproduce these biases in its responses. This could lead to harmful consequences, such as perpetuating harmful stereotypes or making inaccurate predictions.

Many people are also concerned about its impact on education. Are students going to be able to think on their own? Is this AI model going to influence their writing skills? Well, these are pretty good questions. Like many other things, Chat GPT is only a tool that we can use to increase our efficiency, save time and turn to in times of creative crisis (who doesn’t hate that?). Maybe we should learn how to use this tool in a way that increases our skills and is not harmful? “Ethan Mollick, an entrepreneurship and innovation professor at the University of Pennsylvania’s Wharton School, told NPR on Thursday that he now requires his students to use ChatGPT to help with their classwork. “This is a tool that’s useful,” Mollick said during the NPR interview. “There’s a lot of positives about it. That doesn’t minimize the fact that cheating and negativity are there, but those have been there for a long time.” His new AI policy — which NPR reviewed — calls AI usage an “emerging skill.” The policy also states that students must check ChatGPT’s responses and will be held accountable for any inaccuracies that the bot spits out.” (businessinsider.com, Aaron Mok, ”A Wharton business school professor is requiring his students to use ChatGPT”). Of course, everything is only as useful as we make it. We cannot assume that something is 100% good or bad. It all depends on the way we use it.

What about the limitations? Although the model can provide human-like responses, it may still lack the context and emotional intelligence that a human would have in a conversation. For example, the model may struggle to understand sarcasm or humor, or to respond appropriately to emotionally charged situations. This lack of emotional intelligence can result in awkward or inappropriate responses, which may hinder the effectiveness of the model in certain applications.

Finally, there is a risk of the model being misused for malicious purposes. That’s right. It can be used to spread false information or impersonate people online, which could have serious consequences for individuals and society as a whole. This highlights the need for responsible use of Chat GPT, as well as for ongoing monitoring and regulation of the technology.

Despite these disadvantages, it is an exciting and rapidly developing field, with numerous recent developments worth mentioning. For example, OpenAI recently released a new version, called GPT-3, which has significantly improved language capabilities and has been integrated into various products and services. GPT-3 has received widespread attention, with many experts hailing it as a breakthrough in AI technology.

An exciting development in this field is also the recent release of a model that can generate realistic images. This model can be combined with Chat GPT to create photo-realistic fake news and deepfakes, which could have significant implications for politics, media, and society as a whole. This highlights the need for ongoing research and development in this area.

In the final analysis, it is a powerful and versatile AI model that has many potential applications. From customer service to content creation, the model provides businesses with new opportunities to engage with customers and process large amounts of text data. However, there are also several disadvantages to consider, such as bias and the potential for malicious use. As the technology continues to evolve and improve, it is important that businesses use it responsibly and in accordance with ethical guidelines. The hot news surrounding Chat GPT highlights the exciting potential of this technology and its impact on various industries.

Tech predictions for the New Year – trends and threats


First, let’s talk about cybersecurity. Cybersixgill, in its report predicting the field’s future, stated: “Cybercrime is increasingly lucrative. […] We expect a record-breaking year of cyber security breach notifications, not only because of the sophistication of threat actors but also due to larger changes in the world. Global unrest, supply chain instability and soaring inflation will impact an organization’s ability to mitigate, remediate or prevent a problem. […] AI will play a huge role both in cyber threat intelligence and in fighting cybercrime.”

Cybersecurity expert Dov Lerner said that “While the technology has been developing for a while, the difference now is that it’s more accessible. Advanced AI capabilities that were only in the hands of a few governments and researchers became mainstream in 2022 – think DALL-E, Stable Diffusion and ChatGPT. We think cybersecurity will go in this direction in 2023 as AI capabilities become available for both attackers and defenders.”

In terms of trends, Mann predicts a lot of changes in analytics. It is a very dynamic, rapidly developing area whose future holds plenty of exciting opportunities as well as threats. They mention that the four major trends in 2023 will be: “The rise of low-code and no-code automated machine learning (AutoML), enhanced digital twin technologies, industrial adoption of computer vision and a blurring of the lines between edge and cloud.” Mann says we’ll also see more purpose-built digital-twin applications in 2023, specialized for defined use cases in the energy, infrastructure optimization and industrial manufacturing sectors. Organizations are also expected to increasingly adopt CV and other AI technologies, with the kinds of industries harnessing these technologies expanding beyond more niche use cases by IT staff and data scientists. According to Mann, CV initiatives will focus on “yield improvement, operational efficiency and safety.” Finally, with cloud hyperscalers like Microsoft Azure, Amazon Web Services and Google Cloud Platform starting to roll out core cloud services on the edge, edge computing will become an extension of cloud computing. Workloads will be distributed intelligently across hybrid environments. This will mean quicker adoption of IoT analytics at the edge in 2023 to enhance decision making at the source. […] “These trends don’t mark a departure from previous years, but rather a continuation of market trajectories following the pandemic.” (IoT World Today)

Some of the tech trends were presented by Koenig at the CES convention (Consumer Electronics Show) organized by the Consumer Technology Association – one of the biggest tech events in the world. According to “IoT World Today”: “With the potential of a 2023 recession, Koenig suggested that the market can expect to see four enterprise tech innovations: connected intelligence, autonomous systems, quantum computing and 5G industrial IoT applications.

The next generation of online experiences, the metaverse of things (MoT) and the technology innovations fueling it, were highlighted as digital twins, virtual spaces, shared experiences and virtual scenarios.

Koenig said highlights of transportation, one of the major themes of this CES, were the advancements of autonomous systems, the transformation of the in-vehicle experience and the evolution of the electrification ecosystem.

In-vehicle enhancements coming were noted as screenification, voice control, retail and entertainment services and features as a service (FaaS) models.

In health services, Koenig sees new frontiers in innovation, including anytime virtual visits, remote patient monitoring, fitness and wellness platforms and access to online pharmacies.

Koenig also spent some time detailing farming of the future, which is to include intelligent silos, drones and soil sensors, farming robots, connected farmers and satellite 5G connections.

One major farming innovation was the John Deere fully autonomous tractor we wrote about at the last CES. That tractor is the CES Innovation Award winner in robotics this year.”

What is Lensa and why has it recently become so popular?


Lensa is a photo editor created in 2018 by Prisma Labs. It is currently one of the most frequently downloaded free apps on the App Store. Although it is available only for iPhone users, it is one of the most popular apps of the second half of 2022. It is also considered one of the milestones of digital art.

But what is the reason for its huge popularity? What has captured users’ attention?

For one, it recently started offering the “magic avatar” feature. It is based on AI that generates avatars of users and displays them in fantasy, comic or cartoonish styles. It is fun, with a little dose of mystery. Who doesn’t love that? That feature has helped its developers enter new areas of the market and target new social groups. Nearly every Instagram influencer has tried it out, and most of them even posted the results on their social media. Needless to say, that’s currently the most popular and effective way of advertising.

How does it work?

“In order to get your own AI-generated portraits, you have to download the Lensa app and upload between 10 and 20 photographs. The app specifies that photos should be individual shots of the same person. Close-up selfies that show facial features, a variety of backgrounds, facial expressions and head tilts are ideal.

Users are asked to specify gender — female, male or other — which impacts what kind of prompts the AI generator is fed and which archived images it pulls from its own database to create all-new versions of your likeness.

The process takes 40 minutes on average to produce initial images, but depending on how many people are using the app, it can take a lot longer. The app does track how long you have to wait and notifies you when your photos are ready.” (Sofia Misenheimer, mtlblog.com)

Of course, the company had to face a fair dose of challenges too. Many users, especially women, have been complaining that the generated images were overly sexual. Since the AI’s database is full of all kinds of anime drawings and graphics, it bases its output on that content, and unfortunately that content is often sexual or has overly sexualized features.

Another obstacle they had to overcome was the anger of graphic designers. Not only did designers blame Lensa for taking away their clients and jobs; the database on which the AI bases its output also consists of graphic designs made by those same artists without their permission. Some of them even state that many of the AI-created images are very similar, if not identical, to their original works. The company responded to those accusations in a tweet: “As cinema didn’t kill theatre and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool.” Needless to say, that statement didn’t calm the artists, who had already lost a huge part of their income to the cheap and accessible work of the Lensa app.

The discussion about the ethics of Lensa’s policy and the possibility of the app running artists out of business is ongoing, but it hasn’t affected the number of downloads.

The Lensa app is still extremely popular, and its developers are currently working on new, exciting features to interest users even more.

How did Big Tech companies achieve dominance?


First of all, let’s explain what exactly Big Tech, otherwise known as the Big Five, the Big Four or the Tech Giants, is. It consists of Amazon, Google (or Alphabet), Meta, Apple and Microsoft. They are the leading, “dominant players in their respective areas of technology: artificial intelligence, e-commerce, online advertising, consumer electronics, cloud computing, computer software, media streaming, smart home, self-driving cars, and social networking. They are among the most valuable public companies globally, each having had a maximum market capitalization ranging from around $1 trillion to above $3 trillion. They are also considered among the most prestigious employers in the world, especially Google.” (Wikipedia, “Big Tech”). But how did they achieve this dominance? What made them the giants they are today?

These companies were the forerunners in the industries they now dominate. By constantly developing their technologies while also adapting to clients’ expectations and to the challenges of the markets they operate in, they were able to take over. An extremely important factor was that they focused not only on technological development but also addressed industry challenges and met clients’ needs.

In the case of Amazon, growth through entering new markets and manufacturing new types of products proved very effective. According to a Congressional report (2020), “Amazon has a monopoly over merchants that sell their products through their services, mostly because they don’t have a viable alternative. Meanwhile Amazon has an incentive to use data from competing merchants to the advantage of its own goods and services.” This gives the company the opportunity and the advantage to effectively develop its own product range as well as control the market.

Apple, on the other hand, has full control over which apps it allows consumers to download on their devices. But what exactly does that mean? It means that software developers are 100% dependent on the company’s decisions, which leaves them prone to being overcharged. Furthermore, it gives Apple insight into which services and apps are in the highest demand, allowing it to create competing products.

Google is constantly squashing competitors in order to maintain its dominance. According to a Reuters report, the company demands that its partners put the Google search engine front and center on mobile devices. It has also come up with various ways to generate revenue. “One of the primary ways Alphabet generates revenue through advertising is through its Google Ads program. Whenever a user searches for anything using Google’s search engine, an algorithm generates a list of search results. The algorithm attempts to provide the most relevant search result for the query as well as related suggested pages from a Google Ads advertiser.” (Investopedia, “How Google (Alphabet) Makes Money”). “With the ad piece in place to complement search, Google began to innovate in earnest. Some moves were obvious, such as Google publishing and acquiring digital assets that would deliver more ad-driven revenue as traffic grew and more ad space as content increased. These included YouTube (acquired 2006), Google Maps (2005), Google Blogger (2003), and Google Finance (2006).” (Investopedia, “Becoming a Digital Powerhouse”).

The founders of Facebook aimed for total dominance from the very beginning. “Within the first eight years, Facebook would hold one of the biggest initial public offerings in Internet history and hit a peak market capitalization of over $104 billion. And within 10 years, Facebook would announce it had 1.228 billion monthly active users across the globe. It’s now got 2.3 billion.” (“10 reasons why Facebook has been so successful”, Maggie Tillman, 26 March 2021). The company constantly monitors the competition and creates solutions that give it a significant advantage. For instance, it saw WhatsApp and Instagram as threats to its long-term plans and eventually bought them out (Instagram for $1 billion in 2012 and WhatsApp for $19 billion in February 2014).

Finally, Microsoft – the company was extremely financially cautious very early on. Bill Gates, the founder of the company, made a plan to collect enough savings to get it through a whole year without any revenue. Another very important factor in its success is the range of products and services it offers and the number of markets it operates in. The list is very long. Software alone includes Azure, GitHub, JScript, Microsoft BASIC, Microsoft Small Basic, Microsoft XNA, Silverlight, TypeScript, VBScript, Microsoft Access, Microsoft Excel, Microsoft Lens, Microsoft OneNote, Microsoft Outlook, Microsoft PowerPoint, Microsoft Project, Microsoft Publisher, Skype for Business and Microsoft Sway; the hardware side consists of computer hardware, gaming hardware and mobile hardware. Microsoft has also bought LinkedIn and Skype and holds a near-monopoly on PC operating systems. It was even sued by a competitor, Netscape, in 2002, “who has filed an antitrust lawsuit on the same grounds that were stated by the US Department of Justice during their investigations during the 1990s. Netscape alleged that Microsoft had abused its monopoly by forcing Windows users to use the inbuilt browser as opposed to other offerings including Navigator.” (“Netscape vs Microsoft Antitrust Lawsuit, 2002”, lawteacher.net, 7th Aug 2019). Eventually Microsoft managed to drive Netscape out of the market and make its products obsolete.

As we can see, there is a visible pattern in the policies and actions of these companies. As the iconic movie “The Wolf of Wall Street” put it: “Supply and demand, my friend.” The Big Tech companies chose the direction of their development early on, but they remembered to constantly adjust to new challenges, and that granted them dominance in their respective fields.

Can AI robots influence our day-to-day lives?


Emerging AI technologies are already changing our lives. They have proved useful in various industries. Their main purpose is to increase the efficiency of work in companies and to decrease the margin of error in human work, which is a big problem many companies have to face daily.

Speech recognition is one of the most popular examples of AI technology. It has significantly improved the way we write articles and papers or simply look things up online, and it has shortened the time needed for research. Nearly every iPhone user knows how to use Siri, Apple’s voice-controlled virtual assistant, or at least what it is. It can be helpful in various aspects of life: navigation, smart home, everyday tasks, research, music, podcasts and so on. Moreover, Apple’s privacy policy states that it “keeps users’ information private and secure — whatever he asks Siri is not associated with his Apple ID. The power of the Apple Neural Engine ensures that the audio of users’ requests never leaves his iPhone or iPad unless he chooses to share it.”
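Siri itself is not publicly scriptable, but to show what the speech-to-text building block looks like in practice, here is a minimal sketch using the open-source SpeechRecognition package for Python; the package choice and its free Google web API backend are assumptions made for illustration, not part of Apple’s stack.

```python
# Small speech-to-text sketch with the SpeechRecognition package
# (requires a working microphone and the PyAudio dependency).
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Sends the audio to Google's free web speech API for transcription.
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio.")
```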

Another extremely useful solution provided by AI technologies is face recognition. It is used mainly by phone manufacturers as another, “fun” way in which customers can unlock their devices, but it has proved quite essential in social media too. What does face recognition have to do with social media, you might ask? The answer is actually quite simple. Millions of Instagram, TikTok and Facebook influencers became famous mainly because of their videos with funny, scary or beautifying filters. That might sound a little immature or childish, but this industry in fact generates millions of dollars monthly.
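As a rough illustration of the underlying building block, here is a minimal face-detection sketch using OpenCV’s bundled Haar cascade; real face-unlock and filter pipelines use far more sophisticated models, and the image path below is just a placeholder.

```python
# Minimal face-detection sketch with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("selfie.jpg")  # placeholder path: any local photo will do
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a green rectangle around each detected face.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("selfie_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```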

A few years ago, a team from UC Berkeley and Carnegie Mellon University started working on a stumble-proof robot that would adapt to challenging, difficult terrain in real time. “The system was trained entirely in simulation, in a virtual version of the real world where the robot’s small brain (everything runs locally on the on-board limited compute unit) learned to maximize forward motion with minimum energy and avoid falling by immediately observing and responding to data coming in from its (virtual) joints, accelerometers and other physical sensors.” (Devin Coldewey, TechCrunch, July 9, 2021). The possibilities of this technology are endless if we are able to apply it to our daily work and research. Robots like these would be able to explore hard-to-reach areas and make human work much easier and more efficient.
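To show roughly what “trained entirely in simulation” means, here is a schematic interaction loop using the Gymnasium library; this is not the Berkeley/CMU system, the environment name is an assumption (Gymnasium ships several MuJoCo locomotion tasks such as "Ant-v4" when the mujoco extra is installed), and a random policy stands in for the learned controller.

```python
# Schematic of reward-driven locomotion training in simulation.
import gymnasium as gym

env = gym.make("Ant-v4")          # assumed simulated locomotion environment
obs, info = env.reset(seed=0)

total_reward = 0.0
for step in range(1000):
    action = env.action_space.sample()   # a trained policy would go here
    obs, reward, terminated, truncated, info = env.step(action)
    # The environment's reward trades off forward progress against a
    # control (energy) cost, mirroring the objective described above.
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print("Episode return with a random policy:", total_reward)
```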

But what if one machine could have all these qualities? What if it could speak, understand human speech, recognize faces and move, while constantly readapting to a changing environment? That’s an extremely exciting prospect. Teams from Boston Dynamics, a robotics and engineering design company from Massachusetts, and Tesla, the automotive company led by Elon Musk, are both currently working on humanoids that would have all of the above. If the projects are successful, they could change our daily lives forever.

Atlas – because that is the name of the Boston Dynamics robot – “was initially designed for search and rescue tasks and unveiled to the public in 2013. The robot stands about 5 feet (1.5 meters) tall and weighs about 190 pounds (86 kilograms). It is battery-powered and hydraulically actuated with 28 degrees of freedom. […] It can adapt behaviors based on what it sees. That means engineers do not have to pre-program jumping motions for all the platforms and gaps the robot might encounter.” (Leslie Katz, CNET.com, “See Boston Dynamics Atlas robots work a parkour course like it’s nothing”). Tesla’s robot – “Optimus”, although not that impressive in terms of athletics, looks much more humanlike.

Robots with all these human abilities would be able to replace people in many industries. Moreover, since their work and actions are repetitive, there would be fewer mistakes and misunderstandings than in human work.

Robot “Optimus”

What is edge computing and why is it crucial to the development of IoT?


For several years now, companies using or basing their services on IoT have been depending on solutions provided by the cloud. But the market is constantly changing, and customers always expect high-performance products. As Aleksander Poniewierski, Global Digital and Emerging Technology Leader, mentions in his book “SPEED no limits in the digital era” (p. 179), “the synergy of emerging technologies shapes habits, attitudes and expectations. We are used to not having to queue at a bank window anymore. We make transfers or withdrawals via mobile phone applications. And it takes a few seconds. […] Time saving and convenience are addictive and feeding these needs is one of the ways that technology is changing the world.” In order to provide that, the exchange of data has to be very quick and the response time immediate.

But what exactly is edge computing and how does it differ from cloud computing? Until now, data processing and storage took place in cloud data centers. This model was sufficient until IoT devices created the need for faster, real-time processing. For example, if a self-driving car had a delay in analyzing data, it could have tragic consequences. The solution to this problem is actually quite simple: creating local databases and storing and processing data close to its sources – the edge devices. That’s exactly what edge computing was created for. According to the authors of “An Overview on Edge Computing Research”, Keyan Cao, Yefan Liu, Gongjie Meng and Qimeng Sun, “it stores and processes data at the edge of the network. It has proximity and location awareness, and provides users with near-end services. In terms of data processing, it is faster, real-time, and secure. It can also solve the problem of excessive energy consumption in cloud computing, reduce costs, and reduce the pressure of network bandwidth. Edge computing is applied in various fields such as production, energy, smart home, and transportation.”

As we know, in order to work efficiently, services based on IoT have to collect and process an enormous amount of data. Cloud computing can prove quite expensive, and even economically unprofitable, for this purpose, although it works better for complex analytics and advanced visualizations, while edge computing lets us perform many operations instantly. But the two concepts can work together, creating many exciting possibilities. For example, “technicians working on a remote wind turbine use edge computing to view basic data and analytic information in the field. The essential data needed to diagnose the turbine is more efficiently delivered in the field without having to rely on patchy cellular communications with a cloud based solution. Processing power is provided at the data source or ‘edge’ via standard PC hardware or other IIoT gateway devices. The centralized cloud platform is still utilized for more resource-intensive analytics, stored business logic and data warehousing” (Open Automation Software Blog). Both paradigms are also very important for the miniaturization of IoT devices: they let us use only sensors and the necessary mechanical parts, while all the computing power comes from a third party.
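Here is a minimal sketch of the edge/cloud split described above: the edge node reads raw sensor samples, makes an immediate local decision, and ships only a small summary upstream instead of every reading. The sensor values, threshold and upload function are placeholders, not any specific IIoT product’s API.

```python
# Edge-side aggregation sketch: act locally, send only a summary to the cloud.
import random
import statistics
import time

def read_vibration_sensor() -> float:
    return random.gauss(mu=0.5, sigma=0.1)   # stand-in for real turbine data

def upload_to_cloud(summary: dict) -> None:
    print("Sending summary to cloud:", summary)  # placeholder for an HTTPS call

window = [read_vibration_sensor() for _ in range(600)]   # e.g. 10 minutes of samples

# Immediate local decision at the edge -- no round trip to the cloud needed.
if max(window) > 0.9:
    print("ALERT: vibration threshold exceeded, flagging turbine for shutdown")

# Only a compact summary leaves the site, saving bandwidth and latency.
upload_to_cloud({
    "samples": len(window),
    "mean": round(statistics.mean(window), 3),
    "stdev": round(statistics.stdev(window), 3),
    "max": round(max(window), 3),
    "timestamp": time.time(),
})
```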

An interesting example of this computing paradigm in use is the mobile game “Pokémon Go”, developed by Niantic in collaboration with Nintendo. Players of this augmented reality game had been facing performance problems, mostly around response time, and games that claim to provide a real-time experience are very delay-sensitive. As an accomplished IT trade press writer, Paul Desmont describes that the game “requires a constant back and forth of data between the user and the servers supporting the game. That includes location information from dozens if not hundreds or thousands of users in close proximity, messages back to them that prompt the virtual images to pop up on their phones, data on how many Pokémon each one catches and more.” (“Stellar Growth of Pokémon GO Highlights Need for Edge Data Centers”, Schneider Electric Blog). Edge computing provided an answer to this problem. It enabled the makers of the game to store and analyze data closer to its users, without exchanging all of it with a central cloud data center, which made the response time much shorter.

Furthermore, many companies nowadays are leaning towards a sustainable development model. What does that mean, and why is edge computing an integral part of it? In simple terms, sustainability requires acknowledging that the workings of natural ecosystems and the development of human-made technologies have to be integrated. A good example of a company trying to work in that model is Fujitsu. As we can read on their website, their philosophy is to “recognize that global environmental protection is a vital business issue. By utilizing their technological expertise and creative talents in the ICT industry, they seek to contribute to the promotion of sustainable development. In addition, while observing all environmental regulations in their business operations, they are actively pursuing environmental protection activities on their own initiative. Through their individual and collective actions, they will continuously strive to safeguard a rich natural environment for future generations” (President of Fujitsu Limited, April 2011). In order to create technology that can be considered green IT, we have to lower the amount of energy needed to analyze and store data. Data centers alone are no longer sufficient here. Edge computing gives us the possibility to save energy and time and to avoid network congestion.

Moreover, people are becoming more aware of how their data is processed and of the value of their privacy. Because of that, governments all over the world are creating new restrictions regarding the protection, processing and usage of data. European countries came up with the GDPR (General Data Protection Regulation), which according to gdpr.eu “is the toughest privacy and security law in the world. Though it was drafted and passed by the European Union (EU), it imposes obligations onto organizations anywhere, so long as they target or collect data related to people in the EU.” Being able to process data at the source helps to reduce the number of transfers and devices involved, which significantly improves security. “Edge computing in the smart home has the potential to give control of personal data back to consumers – one of the primary goals of GDPR. By integrating edge capabilities into their core services, providers of smart home accessories offer users control of the data, whether they transmit it to the cloud or store and process it locally” (Machnation Blog, “Edge computing helps organizations meet GDPR compliance”).

As you can see, edge computing is currently one of the most efficient ways to develop near-end, technology-based industries. It provides solutions to various problems companies have been facing regarding both the speed of data flow and profitability, enabling them to run their businesses under a sustainable development model.