Thursday, 22 February 2024

Top technology risks we will face by 2040

Lancaster. The technology and accessibility of computer systems are changing at an astonishingly rapid rate. There have been exciting advances in artificial intelligence, in the clusters of small interconnected devices we call the “Internet of Things,” and in wireless connectivity. Unfortunately, these improvements bring potential dangers as well as benefits. To have a secure future we need to anticipate what might happen in computing and address it early. So, what do experts think will happen, and what can we do to prevent major problems? To answer this question, our research team from the Universities of Lancaster and Manchester turned to the science of looking into the future, known as “forecasting”.

No one can predict the future, but we can make forecasts: descriptions of what may happen based on current trends. Indeed, long-term forecasts of technology trends can prove remarkably accurate. And an excellent way to get a forecast is to combine the views of several different experts and find out where they agree. We consulted 12 expert “futurists” for a new research paper. These are people whose roles involve making long-term forecasts of the effects of changes in computer technology, in this case up to the year 2040. Using a technique called a Delphi study, we combined the futurists’ forecasts into a set of risks, along with their recommendations for addressing those risks.

Software Concerns
Experts predict that rapid advances in artificial intelligence (AI) and connected systems will lead to a world far more computer-driven than it is today. Surprisingly, they expected little impact from two much-hyped innovations. The first was blockchain, a way of recording information that makes it difficult or impossible to manipulate the system, which they suggested is largely irrelevant to the problems ahead. The second was quantum computing, which is still in its early stages and may have little impact over the next 15 years. The futurists highlighted three major risks associated with the development of computer software, outlined below.

AI competition leading to trouble
Our experts suggested that many countries treat AI as an area in which they want to gain a competitive technological edge, and that this will encourage software developers to take risks in their use of AI. This, combined with AI’s complexity and its ability to surpass human capabilities, could lead to disasters. For example, imagine that a shortcut in testing causes an error in the control systems of cars manufactured after 2025, one that goes unnoticed amid all the AI’s complex programming. It could even be linked to a specific date, with large numbers of cars starting to behave erratically at the same time, causing many deaths worldwide.

Generative AI
Generative AI could make it impossible to know what is true. For years, photos and videos have been very difficult to fake, so we have come to expect them to be genuine. Generative AI has already fundamentally changed this situation. Its ability to produce convincing fake media is expected to improve, so it will become increasingly difficult to tell whether an image or video is real. Suppose someone in a position of trust – a respected leader, or a celebrity – uses social media to show genuine content, but occasionally mixes in convincing fakes. For their followers, there is no way to tell the difference – it would be impossible to know the truth.

Invisible cyber attacks
Finally, the sheer complexity of the systems that will be built – networks of systems owned by different organizations, all dependent on each other – will have unexpected consequences. It will be difficult, if not impossible, to get to the root of why things go wrong. Imagine a cybercriminal hacking an app used to control appliances such as ovens or fridges, causing all the appliances to switch on at the same time. This creates a surge in electricity demand on the grid, leading to major power cuts. It would be challenging for power company experts even to identify which devices caused the surge, since they are all controlled by the same app. Cyber sabotage would become invisible and impossible to distinguish from normal problems.

Software jujitsu
The purpose of such forecasts is not to create alarm, but to help us begin solving the problems. Perhaps the simplest suggestion made by the experts is a kind of software jujitsu: using software to defend and protect against itself. We can make a computer program perform its own safety audit by writing additional code that validates the program’s output – effectively, code that checks itself. Likewise, we can insist that the methods already used to ensure software operates safely continue to be applied to new technologies, and that the newness of these systems is not used as an excuse to ignore good security practice.
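
As a minimal sketch of what such self-checking code could look like, the snippet below (in Python) wraps a hypothetical AI-driven control function in a validator that refuses to act on any output falling outside a safe range. The function names, threshold and fallback behaviour are illustrative assumptions, not details from the research paper.

    # Sketch of "code that checks itself": an output validator wrapped
    # around a control function. All names and thresholds are hypothetical.

    def validate_output(check, on_failure):
        """Wrap a function so its result is verified before it is used."""
        def decorator(func):
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)
                if not check(result):
                    return on_failure(result)  # refuse to act on a bad result
                return result
            return wrapper
        return decorator

    def safe_fallback(bad_value):
        # Fall back to a conservative default instead of trusting the output.
        return 0.0

    @validate_output(check=lambda brake: 0.0 <= brake <= 1.0,
                     on_failure=safe_fallback)
    def requested_braking(sensor_reading):
        # Stand-in for output produced by a complex, opaque AI model.
        return sensor_reading * 1.3  # may drift outside the valid range

    print(requested_braking(0.5))  # 0.65 passes the check and is used
    print(requested_braking(0.9))  # 1.17 is rejected; the fallback 0.0 is used

The point is not this particular check, but the pattern: the program’s output is validated by separate, simpler code before anyone, or anything, acts on it.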

Strategic solutions
But the experts agreed that technical answers alone will not suffice. Instead, solutions will be found in the interplay between humans and technology. We need to develop the skills to deal with these problems at the intersection of people and technology, and new forms of education that cross disciplines. And governments need to establish safety principles for their own AI procurement and legislate for AI safety across the sector, encouraging responsible development and deployment practices. These forecasts give us a variety of tools for addressing the potential problems ahead. Let us embrace those tools to realize the exciting promise of our technological future.

