
AI is advancing even faster than sci-fi visionaries like Neal Stephenson imagined

By Rizwan Virk, Arizona State University 

Every time I read about another advance in AI technology, I feel like another figment of science fiction moves closer to reality.

Lately, I’ve been noticing eerie parallels to Neal Stephenson’s 1995 novel “The Diamond Age: Or, A Young Lady’s Illustrated Primer.”

“The Diamond Age” depicted a post-cyberpunk sectarian future, in which society is fragmented into tribes, called phyles. In this future world, sophisticated nanotechnology is ubiquitous, and a new type of AI is introduced.

Though inspired by MIT nanotech pioneer Eric Drexler and Nobel Prize winner Richard Feynman, the advanced nanotechnology depicted in the novel still remains out of reach. However, the AI that’s portrayed, particularly a teaching device called the Young Lady’s Illustrated Primer, isn’t only right in front of us; it also raises serious issues about the role of AI in labor, learning and human behavior.

In Stephenson’s novel, the Primer looks like a hardcover book, but each of its “pages” is really a screen display that can show animations and text, and it responds to its user in real time via AI. The book also has an audio component, which voices the characters and narrates stories being told by the device.

Originally created for the young daughter of an aristocrat, the Primer accidentally falls into the hands of a girl named Nell who’s living on the streets of a futuristic Shanghai. It provides Nell personalized emotional, social and intellectual support during her journey to adulthood, serving alternately as an AI companion, a storyteller, a teacher and a surrogate parent.

The AI is able to weave fairy tales that help a younger Nell cope with past traumas, such as her abusive home and life on the streets. It educates her on everything from math to cryptography to martial arts. In a techno-futuristic homage to George Bernard Shaw’s 1913 play “Pygmalion,” the Primer goes so far as to teach Nell the proper social etiquette to be able to blend into neo-Victorian society, one of the prominent tribes in Stephenson’s balkanized world.

No need for ‘ractors’

Three recent developments in AI – in video games, wearable technology and education – reveal that building something like the Primer should no longer be considered the purview of science fiction.

In May 2025, the hit video game “Fortnite” introduced an AI version of Darth Vader, who speaks with the voice of the late James Earl Jones.

While it was popular among fans of the game, the Screen Actors Guild lodged a labor complaint against Epic Games, the creator of “Fortnite.” Even though Epic had received permission from the late actor’s estate, the union argued that actors could have been hired to voice the character and that the company, by refusing to alert the union and negotiate terms, had violated existing labor agreements.

In “The Diamond Age,” while the Primer uses AI to generate the fairy tales that train Nell, for the voices of these archetypal characters, Stephenson concocted a low-tech solution: The characters are played by a network of what he termed “ractors” – real actors working in a studio who are contracted to perform and interact in real time with users.

The Darth Vader “Fortnite” character shows that a Primer built today wouldn’t need to use actors at all. It could rely almost entirely on AI voice generation and have real-time conversations, showing that today’s technology already exceeds Stephenson’s normally far-sighted vision.

Recording and guiding in real time

Synthesizing James Earl Jones’ voice in “Fortnite” wasn’t the only recent AI development heralding the arrival of Primer-like technology.

I recently witnessed a demonstration of wearable AI that records all of the wearer’s conversations. The recordings are then sent to a server, where AI analyzes them and provides the user with both summaries and suggestions about future behavior.

Several startups are making these “always on” AI wearables. In an April 29, 2025, essay titled “I Recorded Everything I Said for Three Months. AI Has Replaced My Memory,” Wall Street Journal technology columnist Joanna Stern describes the experience of using this technology. She concedes that the assistants created useful summaries of her conversations and meetings, along with helpful to-do lists. However, they also recalled “every dumb, private and cringeworthy thing that came out of my mouth.”

AI wearable devices that continuously record the conversations of their users have recently hit the market.

These devices also create privacy issues. The people the user interacts with don’t always know they are being recorded, even as their words are sent to a server for the AI to process. To Stern, the technology’s potential for mass surveillance is readily apparent, presenting a “slightly terrifying glimpse of the future.”

Relying on AI engines such as ChatGPT, Claude and Google’s Gemini, the wearables work only with words, not images. Behavioral suggestions occur only after the fact. However, a key function of the Primer – coaching users in real time in the middle of any situation or social interaction – is the next logical step as the technology advances.

Education or social engineering?

In “The Diamond Age,” the Primer doesn’t simply weave interactive fairy tales for Nell. It also assumes the responsibility of educating her on everything from her ABCs when younger to the intricacies of cryptography and politics as she gets older.

It’s no secret that AI tools, such as ChatGPT, are now being widely used by both teachers and students.

Several recent studies have shown that AI may be more effective than humans at teaching computer science. One survey found that 85% of students said ChatGPT was more effective than a human tutor. And at least one college, Morehouse College in Atlanta, is introducing an AI teaching assistant for professors.

There are certainly advantages to AI tutors: Tutoring and college tuition can be exorbitantly expensive, and the technology can offer better access to education to people of all income levels.

Pulling together these latest AI advances – interactive avatars, behavioral guides, tutors – it’s easy to envision how an AI device like the Young Lady’s Illustrated Primer could be created in the near future. A young person might have a personalized AI character that accompanies them at all times. It could teach them about the world and offer suggestions for how to act in certain situations. The AI could be tailored to a child’s personality, concocting stories that include AI versions of their favorite TV and movie characters.

But “The Diamond Age” offers a warning, too.

Toward the end of the novel, a version of the Primer is handed out to hundreds of thousands of young Chinese girls who, like Nell, didn’t have access to education or mentors. This leads to the education of the masses. But it also opens the door to large-scale social engineering, creating an army of Primer-raised martial arts experts, whom the AI then directs to act on behalf of “Princess Nell,” Nell’s fairy tale name.

It’s easy to see how this sort of large-scale social engineering could be used to target certain ideologies, crush dissent or build loyalty to a particular regime. The AI’s behavior could also be subject to the whims of the companies or individuals that created it. A ubiquitous, always-on, friendly AI could become the ultimate monitoring and reporting device. Think of a kinder, gentler face for Big Brother that people have trusted since childhood.

While large-scale deployment of a Primer-like AI could certainly make young people smarter and more efficient, it could also hamper one of the most important parts of education: teaching people to think for themselves.

About the Author:

Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The hidden cost of convenience: How your data pulls in hundreds of billions of dollars for app and social media companies

By Kassem Fawaz, University of Wisconsin-Madison and Jack West, University of Wisconsin-Madison 

You wake up in the morning and, first thing, you open your weather app. You close that pesky ad that opens first and check the forecast. You like your weather app, which shows hourly weather forecasts for your location. And the app is free!

But do you know why it’s free? Look at the app’s privacy settings. You help keep it free by allowing it to collect your information, including:

  • What devices you use and their IP and Media Access Control addresses.
  • Information you provide when signing up, such as your name, email address and home address.
  • App settings, such as whether you choose Celsius or Fahrenheit.
  • Your interactions with the app, including what content you view and what ads you click.
  • Inferences based on your interactions with the app.
  • Your location at a given time, including, depending on your settings, continuous tracking.
  • What websites or apps that you interact with after you use the weather app.
  • Information you give to ad vendors.
  • Information gleaned by analytics vendors that analyze and optimize the app.

This type of data collection is standard fare. The app company can use this to customize ads and content. The more customized and personalized an ad is, the more money it generates for the app owner. The owner might also sell your data to other companies.

Many apps, including The Weather Channel app, send you targeted advertising and sell your personal data by default.
Jack West, CC BY-ND

You might also check a social media account like Instagram. The subtle price that you pay is, again, your data. Many “free” mobile apps gather information about you as you interact with them.

As an associate professor of electrical and computer engineering and a doctoral student in computer science, we follow the ways software collects information about people. Your data allows companies to learn about your habits and exploit them.

It’s no secret that social media and mobile applications collect information about you. Meta’s business model depends on it. The company, which operates Facebook, Instagram and WhatsApp, is worth US$1.48 trillion. Just under 98% of its profits come from advertising, which leverages user data from more than 7 billion monthly users.

What your data is worth

Before mobile phones gained apps and social media became ubiquitous, companies conducted large-scale demographic surveys to assess how well a product performed and to get information about the best places to sell it. They used the information to create coarsely targeted ads that they placed on billboards, in print and on TV.

Mobile apps and social media platforms now let companies gather much more fine-grained information about people at a lower cost. Through apps and social media, people willingly trade personal information for convenience. In 2007 – a year after the introduction of targeted ads – Facebook made over $153 million, triple the previous year’s revenue. In the past 17 years, that number has increased by more than 1,000 times.

Five ways to leave your data

App and social media companies collect your data in many ways. Meta is a representative case. The company’s privacy policy highlights five ways it gathers your data:

First, it collects the profile information you fill in. Second, it collects the actions you take on its social media platforms. Third, it collects the people you follow and friend. Fourth, it keeps track of each phone, tablet and computer you use to access its platforms. And fifth, it collects information about how you interact with apps that corporate partners connect to its platforms. Many apps and social media platforms follow similar privacy practices.

Your data and activity

When you create an account on an app or social media platform, you provide the company that owns it with information like your age, birth date, identified sex, location and workplace. In the early years of Facebook, selling profile information to advertisers was that company’s main source of revenue. This information is valuable because it allows advertisers to target specific demographics like age, identified gender and location.

And once you start using an app or social media platform, the company behind it can collect data about how you use the app or social media. Social media keeps you engaged as you interact with other people’s posts by liking, commenting or sharing them. Meanwhile, the social media company gains information about what content you view and how you communicate with other people.

Advertisers can find out how much time you spent reading a Facebook post or that you spent a few more seconds on a particular TikTok video. This activity information tells advertisers about your interests. Modern algorithms can quickly pick up on such subtleties and automatically adjust the content shown to you, whether it’s a sponsored post, a targeted advertisement or general content.

Your devices and applications

Companies can also note what devices, including mobile phones, tablets and computers, you use to access their apps and social media platforms. This shows advertisers your brand loyalty, how old your devices are and how much they’re worth.

Because mobile devices travel with you, they have access to information about where you’re going, what you’re doing and who you’re near. In a lawsuit against Kochava Inc., the Federal Trade Commission called out the company for selling customer geolocation data in August 2022, shortly after Roe v. Wade was overturned. The company’s customers, including people who had abortions after the ruling, often didn’t know that data tracking their movements was being collected, according to the commission. The FTC alleged that the data could be used to identify households.

Kochava has denied the FTC’s allegations.

Information that apps can gain from your mobile devices includes anything you have given an app permission to have, such as your location, who you have in your contact list or photos in your gallery.

If you give an app permission to access your location while it’s running, for instance, the platform can track where you are anytime the app is in use. Providing access to contacts may give an app the phone numbers, names and emails of all the people you know.

Cross-application data collection

Companies can also gain information about what you do across different apps by acquiring information collected by other apps and platforms.

The settings on an Android phone show that Meta uses information it collects about you to target ads it shows you in its apps – and also in other apps and on other platforms – by default.
Jack West, CC BY-ND

This is common with social media companies. This allows companies to, for example, show you ads based on what you like or recently looked at on other apps. If you’ve searched for something on Amazon and then noticed an ad for it on Instagram, it’s probably because Amazon shared that information with Instagram.

This combined data collection has made targeted advertising so accurate that people have reported that they feel like their devices are listening to them.

Companies, including Google, Meta, X, TikTok and Snapchat, can build detailed user profiles based on collected information from all the apps and social media platforms you use. They use the profiles to show you ads and posts that match your interests to keep you engaged. They also sell the profile information to advertisers.

Meanwhile, researchers have found that Meta and Yandex, a Russian search engine, have overcome controls in mobile operating system software that ordinarily keep people’s web-browsing data anonymous. Each company placed code on its webpages that used local IP addresses to pass a person’s browsing history, which is supposed to remain private, to mobile apps installed on that person’s phone, de-anonymizing the data. Yandex has been conducting this tracking since 2017, while Meta began in September 2024, according to the researchers.

What you can do about it

If you use apps that collect your data in some way, including those that give you directions, track your workouts or help you contact someone, or if you use social media platforms, your privacy is at risk.

Aside from entirely abandoning modern technology, there are several steps you can take to limit access – at least in part – to your private information.

Read the privacy policy of each app or social media platform you use. Although privacy policy documents can be long, tedious and sometimes hard to read, they explain how social media platforms collect, process, store and share your data.

Check a policy by making sure it can answer three questions: What data does the app collect? How does it collect the data? And what is the data used for? If you can’t answer all three questions by reading the policy, or if any of the answers don’t sit well with you, consider skipping the app until there’s a change in its data practices.

Remove unnecessary permissions from mobile apps to limit the amount of information that applications can gather from you.

Be aware of the privacy settings that might be offered by the apps or social media platforms you use, including any setting that allows your personal data to affect your experience or shares information about you with other users or applications.

These privacy settings can give you some control. We recommend that you disable “off-app activity” and “personalization” settings. “Off-app activity” allows an app to record which other apps are installed on your phone and what you do on them. Personalization settings allow an app to use your data to tailor what it shows you, including advertisements.

Review and update these settings regularly because permissions sometimes change when apps or your phone update. App updates may also add new features that can collect your data. Phone updates may also give apps new ways to collect your data or add new ways to preserve your privacy.

Use private browser windows or reputable virtual private network software, commonly referred to as a VPN, when using apps that connect to the internet and social media platforms. Private browsers don’t store any account information, which limits the information that can be collected. VPNs change the IP address of your machine so that apps and platforms can’t discover your location.

Finally, ask yourself whether you really need every app that’s on your phone. And when using social media, consider how much information you want to reveal about yourself in liking and commenting on posts, sharing updates about your life, revealing locations you visited and following celebrities you like.


This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it.

Kassem Fawaz, Associate Professor of Electrical and Computer Engineering, University of Wisconsin-Madison and Jack West, PhD Student in Computer Science, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Valutico Acquires AI Innovator Paraloq Analytics to Revolutionize Private Company Analysis

VIENNA, Austria – JUNE 19, 2025 – Valutico, a global leader in valuation and financial analysis software, today announced its strategic acquisition of Paraloq Analytics, a Vienna-based artificial intelligence (AI) specialist. This acquisition will integrate Paraloq Analytics’ advanced AI capabilities into Valutico’s renowned platform, empowering financial professionals with unprecedented data-driven insights and efficiency.

The two Vienna-headquartered companies have previously cooperated on the development of Done Diligence, an innovative tool that uses advanced AI agents to empower humans to perform due diligence work more efficiently. Now the companies are joining forces to create a powerhouse to further drive digital transformation in the Financial Services and Banking industries. By embedding AI-driven analytics and enhanced data interpretation into its platform, Valutico will offer its global client base even more robust, accurate, and forward-looking valuation solutions.

“We are thrilled to welcome Paraloq Analytics to the Valutico family,” said Paul Resch, CEO of Valutico. “Paraloq’s deep and long-standing experience with AI, particularly in the Banking sector, perfectly complements our mission to provide the most sophisticated and user-friendly financial analysis platform on the market. This acquisition will significantly accelerate our product roadmap, bringing next-generation intelligence to our customers and further solidifying our leadership position in the space.”

Paraloq Analytics, founded in 2019 by two econometrics PhD candidates from the University of St. Gallen, has quickly established itself as an innovator in applying AI to complex challenges in Banking and related fields. Their expertise in areas such as econometrics, machine learning and AI software development will be instrumental in enhancing Valutico’s data analytics capabilities and augmenting its users’ experience of analysing qualitative information.

“Joining forces with Valutico is an exciting new chapter for Paraloq Analytics,” said Paraloq Co-Founder Maximilian Arrich. “Valutico’s global reach and established platform provide the perfect launchpad for our AI technologies. Over the past year of working together, we built a common vision for the future of financial analysis – one that is more data-driven, intelligent, and efficient. We are eager to contribute our expertise to create truly transformative tools for Finance professionals.”

Strategic Benefits of the Acquisition:

  • Enhanced AI-Powered Insights: Integration of Paraloq’s technology will complement Valutico’s analysis of structured data (e.g., financial information) with diverse sources of unstructured data (e.g., the contents of a virtual data room, news, social media, etc.).

  • Market Access: Valutico’s global reach will accelerate the rollout of Paraloq’s technology to new client verticals and geographies.

  • Talent Acquisition: The Paraloq team will complement the Valutico family and further strengthen its AI capabilities.

  • Innovation Acceleration: The combined expertise will fast-track the development of new, cutting-edge features for Valutico users.

Valutico will begin integrating Paraloq Analytics’ technology and team immediately, with Paraloq co-founder Maximilian Arrich joining Valutico’s management team as VP of AI Research. Clients can expect to see an acceleration of AI-enhanced feature rollouts in upcoming platform updates.

Terms of the acquisition were not disclosed.

About Valutico:

Valutico is a leading global provider of business valuation software. Founded in 2017, Valutico empowers financial professionals and valuation experts in over 90 countries to perform high-quality and efficient valuations with its comprehensive data, automated financial models, and intuitive platform. Valutico is headquartered in Vienna, Austria, with offices in the UK, US, Germany, the Netherlands and Singapore.

 

About Paraloq Analytics:

Paraloq Analytics is a Vienna-based company founded in 2019, specializing in artificial intelligence, machine learning, and econometric solutions for the Banking industry. Paraloq helps businesses unlock the power of their data by developing and implementing bespoke AI-driven software and providing expert data science and AI consulting.

AI helps tell snow leopards apart, improving population counts for these majestic mountain predators

By Eve Bohnett, University of Florida 

Snow leopards are known as the “ghosts of the mountains” for a reason. Imagine waiting for months in the harsh, rugged mountains of Asia, hoping to catch even a glimpse of one. These elusive big cats move silently across rocky slopes, their pale coats blending so seamlessly with snow and stone that even the most seasoned biologists seldom spot them in the wild.

Travel writer Peter Matthiessen spent two months in 1973 searching the Tibetan plateau for them and wrote a 300-page book about the effort. He never saw one. Forty years later, Peter’s son Alex retraced his father’s steps – and didn’t see one either.

Researchers have struggled to come up with a figure for the global population. In 2017, the International Union for Conservation of Nature reclassified the snow leopard from endangered to vulnerable, citing estimates of between 2,500 and 10,000 adults in the wild. However, the group also warned that numbers continue to decline in many areas due to habitat loss, poaching and human-wildlife conflict. Those who study these animals want to help protect the species and their habitat – if only we can determine exactly where they live and how many there are.

Traditional tracking methods – searching for footprints, droppings and other signs – have their limits. Instead of waiting for a lucky face-to-face encounter, conservationists from the Wildlife Conservation Society, led by experts including Stéphane Ostrowski and Sorosh Poya Faryabi, began deploying automated camera traps in Afghanistan. These devices snap photos whenever movement is detected, capturing thousands of images over months, all in hopes of obtaining a rare glimpse of a snow leopard.

But capturing images is only half the battle. The next, even harder task is telling one snow leopard apart from another.

Are these the same animal or different ones? It’s really hard to tell.
Eve Bohnett, CC BY-ND

At first glance, it might sound simple: Each snow leopard has a unique pattern of black rosettes on its coat, like a fingerprint or a face in a crowd. Yet in practice, identifying individuals by these patterns is slow, subjective and prone to error. Photos may be taken at odd angles, under poor lighting, or with parts of the animal obscured – making matches tricky.

A common mistake happens when photos from different cameras are marked as depicting different animals when they actually show the same individual, inflating population estimates. Worse, camera trap images can get mixed up or misfiled, splitting encounters of one cat across multiple batches and identities.

I am a data analyst working with the Wildlife Conservation Society and other partners at Wild Me. My work and that of others has found that even trained experts can misidentify animals, failing to recognize repeat visitors at locations monitored by motion-sensing cameras and counting the same animal more than once. One study found that the snow leopard population was overestimated by more than 30% because of these human errors.

To avoid these pitfalls, researchers follow camera sorting guidelines: At least three clear pattern differences or similarities must be confirmed between two images to declare them the same or different cats. Images too blurry, too dark or taken from difficult angles may have to be discarded. Identification efforts range from easy cases with clear, full-body shots to ambiguous ones needing collaboration and debate. Despite these efforts, variability remains, and more experienced observers tend to be more accurate.

Now people trying to count snow leopards are getting help from artificial intelligence systems, in two ways.

Spotting the spots

Modern AI tools are revolutionizing how we process these large photo libraries. First, AI can rapidly sort through thousands of images, flagging those that contain snow leopards and ignoring irrelevant ones such as those that depict blue sheep, gray-and-white mountain terrain, or shadows.

Unique spots and spot patterns are key to telling snow leopards apart.
Eve Bohnett, CC BY-NC-ND

AI can identify individual snow leopards by analyzing their unique rosette patterns, even when poses or lighting vary. Each snow leopard encounter is compared with a catalog of previously identified photos and assigned a known ID if there is a match, or entered as a new individual if not.

In a recent study, several colleagues and I evaluated two AI algorithms, both separately and in tandem.

The first algorithm, called HotSpotter, identifies individual snow leopards by comparing key visual features such as coat patterns, highlighting distinctive “hot spots” with a yellow marker.

The second is a newer method called pose invariant embeddings, which operates much like facial recognition technology: It recognizes layers of abstract features in the data, identifying the same animal regardless of how it is positioned in the photo or what the lighting conditions are.

We trained these systems using a curated dataset of photos of snow leopards from zoos in the U.S., Europe and Tajikistan, and with images from the wild, including in Afghanistan.

Alone, each model worked about 74% of the time, correctly identifying the cat from a large photo library. But when combined, the two systems together were correct 85% of the time.
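To get a feel for why combining the two matchers helps, here is a minimal Python sketch of one simple fusion strategy: rescale each algorithm’s similarity scores to a common range, then rank candidates by their average. The function names and scores are hypothetical illustrations, not the actual Whiskerbook pipeline, which may fuse results differently.

```python
# Illustrative score fusion for two hypothetical matchers. Assumes each
# matcher returns {candidate_id: similarity_score} for a query photo.

def normalize(scores):
    """Rescale scores to [0, 1] so the two matchers are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {cid: (s - lo) / span for cid, s in scores.items()}

def fuse_matches(hotspotter_scores, embedding_scores, top_k=5):
    """Rank candidates by the average of the two normalized scores."""
    a, b = normalize(hotspotter_scores), normalize(embedding_scores)
    fused = {cid: (a.get(cid, 0.0) + b.get(cid, 0.0)) / 2
             for cid in set(a) | set(b)}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# The matchers disagree on the top match, but fusion promotes the candidate
# that both consider plausible (cat_12).
print(fuse_matches({"cat_07": 0.92, "cat_12": 0.88, "cat_31": 0.40},
                   {"cat_12": 0.95, "cat_07": 0.55, "cat_31": 0.50}))
```

A candidate that both algorithms rank highly rises above one that only a single algorithm favors, which is one intuition for why the combined system can outperform either model alone.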

These algorithms were integrated into Wildbook, an open-source, web-based software platform developed by the nonprofit organization Wild Me and now adopted by ConservationX. We deployed the combined system on a free website, Whiskerbook.org, where researchers can upload images, seek matches using the algorithms, and confirm those matches with side-by-side comparisons. This site is among a growing family of AI-powered wildlife platforms that are helping conservation biologists work more efficiently and more effectively at protecting species and their habitats.

A view from an online wildlife-tracking system suggests a possible match for a snow leopard caught by a remote camera.
Wildbook/Eve Bohnett, CC BY-ND

Humans still needed

These AI systems aren’t error-proof. AI quickly narrows down candidates and flags likely matches, but expert validation ensures accuracy, especially with tricky or ambiguous photos.

Another study we conducted pitted AI-assisted groups of experts and novices against each other. Each was given a set of three to 10 images of 34 known captive snow leopards and asked to use the Whiskerbook platform to identify them. They were also asked to estimate how many individual animals were in the set of photos.

The experts accurately matched about 90% of the images and delivered population estimates within about 3% of the true number. In contrast, the novices identified only 73% of the cats and underestimated the total number, sometimes by 25% or more, incorrectly merging two individuals into one.

Both sets of results were better than when experts or novices did not use any software.

The takeaway is clear: Human expertise remains important, and combining it with AI support leads to the most accurate results. My colleagues and I hope that by using tools like Whiskerbook and the AI systems embedded in them, researchers will be able to more quickly and more confidently study these elusive animals.

With AI tools like Whiskerbook illuminating the mysteries of these mountain ghosts, we have another way to safeguard snow leopards – but success depends on continued commitment to protecting their fragile mountain homes.

About the Author:

Eve Bohnett, Assistant Scholar, Center for Landscape Conservation Planning, University of Florida

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

What is vibe coding? A computer scientist explains what it means to have AI write computer code – and what risks that can entail

By Chetan Jaiswal, Quinnipiac University 

Whether you’re streaming a show, paying bills online or sending an email, each of these actions relies on computer programs that run behind the scenes. The process of writing computer programs is known as coding. Until recently, most computer code was written, at least originally, by human beings. But with the advent of generative artificial intelligence, that has begun to change.

Now, just as you can ask ChatGPT to spin up a recipe for a favorite dish or write a sonnet in the style of Lord Byron, you can ask generative AI tools to write computer code for you. Andrej Karpathy, an OpenAI co-founder who previously led AI efforts at Tesla, recently termed this “vibe coding.”

For complete beginners or nontechnical dreamers, writing code based on vibes – feelings rather than explicitly defined information – could feel like a superpower. You don’t need to master programming languages or complex data structures. A simple natural language prompt will do the trick.

How it works

Vibe coding leans on standard patterns of technical language, which AI systems use to piece together original code from their training data. Any beginner can use an AI assistant such as GitHub Copilot or Cursor Chat, put in a few prompts, and let the system get to work. Here’s an example:

“Create a lively and interactive visual experience that reacts to music, user interaction or real-time data. Your animation should include smooth transitions and colorful and lively visuals with an engaging flow in the experience. The animation should feel organic and responsive to the music, user interaction or live data and facilitate an experience that is immersive and captivating. Complete this project using JavaScript or React, and allow for easy customization to set the mood for other experiences.”

But AI tools do this without any real grasp of specific rules, edge cases or security requirements for the software in question. This is a far cry from the processes behind developing production-grade software, which must balance trade-offs between product requirements, speed, scalability, sustainability and security. Skilled engineers write and review the code, run tests and establish safety barriers before going live.

But while the lack of a structured process saves time and lowers the skills required to code, there are trade-offs. With vibe coding, most of these stress-testing practices go out the window, leaving systems vulnerable to malicious attacks and leaks of personal data.

And there’s no easy fix: If you don’t understand every – or any – line of code that your AI agent writes, you can’t repair the code when it breaks. Or worse, as some experts have pointed out, you won’t notice when it’s silently failing.

The AI itself is not equipped to carry out this analysis either. It recognizes what “working” code usually looks like, but it cannot necessarily diagnose or fix deeper problems that the code might cause or exacerbate.
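To make the risk concrete, here is a small Python sketch of a classic flaw that can hide in plausible-looking generated code: a database query assembled by pasting user input into the SQL string, which “works” in a demo but is open to SQL injection. The table and function names are invented for illustration.

```python
import sqlite3

# Vulnerable pattern: user input is pasted directly into the SQL string,
# so crafted input like "x' OR '1'='1" returns every row in the table.
def find_user_unsafe(conn, username):
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safe pattern: a parameterized query lets the driver escape the input.
def find_user_safe(conn, username):
    return conn.execute("SELECT * FROM users WHERE name = ?",
                        (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a@example.com"), ("bob", "b@example.com")])

print(find_user_unsafe(conn, "x' OR '1'='1"))  # leaks both rows
print(find_user_safe(conn, "x' OR '1'='1"))    # returns an empty list
```

Both functions pass a casual “does it work” test on normal input, which is exactly why unreviewed code can fail silently.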

IBM computer scientist Martin Keen explains the difference between AI programming and traditional programming.

Why it matters

Vibe coding could be just a flash-in-the-pan phenomenon that will fizzle before long, but it may also find deeper applications with seasoned programmers. The practice could help skilled software engineers and developers more quickly turn an idea into a viable prototype. It could also enable novice programmers or even amateur coders to experience the power of AI, perhaps motivating them to pursue the discipline more deeply.

Vibe coding also may signal a shift that could make natural language a more viable tool for developing some computer programs. If so, it would echo early website editing systems known as WYSIWYG editors that promised designers “what you see is what you get,” or “drag-and-drop” website builders that made it easy for anyone with basic computer skills to launch a blog.

For now, I don’t believe that vibe coding will replace experienced software engineers, developers or computer scientists. The discipline and the art are much more nuanced than what AI can handle, and the risks of passing off “vibe code” as legitimate software are too great.

But as AI models improve and become more adept at incorporating context and accounting for risk, practices like vibe coding might cause the boundary between AI and human programmer to blur further.

About the Author:

Chetan Jaiswal, Associate Professor of Computer Science, Quinnipiac University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI literacy: What it is, what it isn’t, who needs it and why it’s hard to define

By Daniel S. Schiff, Purdue University; Arne Bewersdorff, Technical University of Munich, and Marie Hornberger, Technical University of Munich 

It is “the policy of the United States to promote AI literacy and proficiency among Americans,” reads an executive order President Donald Trump issued on April 23, 2025. The executive order, titled Advancing Artificial Intelligence Education for American Youth, signals that advancing AI literacy is now an official national priority.

This raises a series of important questions: What exactly is AI literacy, who needs it, and how do you go about building it thoughtfully and responsibly?

The implications of AI literacy, or lack thereof, are far-reaching. They extend beyond national ambitions to remain “a global leader in this technological revolution” or even prepare an “AI-skilled workforce,” as the executive order states. Without basic literacy, citizens and consumers are not well equipped to understand the algorithmic platforms and decisions that affect so many domains of their lives: government services, privacy, lending, health care, news recommendations and more. And the lack of AI literacy risks ceding important aspects of society’s future to a handful of multinational companies.

How, then, can institutions help people understand and use – or resist – AI as individuals, workers, parents, innovators, job seekers, students, employers and citizens? We are a policy scientist and two educational researchers who study AI literacy, and we explore these issues in our research.

What AI literacy is and isn’t

At its foundation, AI literacy includes a mix of knowledge, skills and attitudes that are technical, social and ethical in nature. According to one prominent definition, AI literacy refers to “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.”

AI literacy is not simply programming or the mechanics of neural networks, and it is certainly not just prompt engineering – that is, the act of carefully writing prompts for chatbots. Vibe coding, or using AI to write software code, might be fun and important, but restricting the definition of literacy to the newest trend or the latest need of employers won’t cover the bases in the long term. And while a single master definition may not be needed, or even desirable, too much variation makes it tricky to decide on organizational, educational or policy strategies.

Who needs AI literacy? Everyone, including the employees and students using it, and the citizens grappling with its growing impacts. Every sector and sphere of society is now involved with AI, even if this isn’t always easy for people to see.

Exactly how much literacy everyone needs and how to get there is a much tougher question. Are a few quick HR training sessions enough, or do we need to embed AI across K-12 curricula and deliver university microcredentials and hands-on workshops? There is much that researchers don’t know, which leads to the need to measure AI literacy and the effectiveness of different training approaches.

Ethics is an important aspect of AI literacy.

Measuring AI literacy

While there is a growing and bipartisan consensus that AI literacy matters, there’s much less consensus on how to actually understand people’s AI literacy levels. Researchers have focused on different aspects, such as technical or ethical skills, or on different populations – for example, business managers and students – or even on subdomains like generative AI.

A recent review study identified more than a dozen questionnaires designed to measure AI literacy, the vast majority of which rely on self-reported responses to questions and statements such as “I feel confident about using AI.” There’s also a lack of testing to see whether these questionnaires work well for people from different cultural backgrounds.

Moreover, the rise of generative AI has exposed gaps and challenges: Is it possible to create a stable way to measure AI literacy when AI is itself so dynamic?

In our research collaboration, we’ve tried to help address some of these problems. In particular, we’ve focused on creating objective knowledge assessments, such as multiple-choice surveys tested with thorough statistical analyses to ensure that they accurately measure AI literacy. We’ve so far tested a multiple-choice survey in the U.S., U.K. and Germany and found that it works consistently and fairly across these three countries.
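For a flavor of what those statistical analyses can involve, here is a minimal Python sketch of one standard check, Cronbach’s alpha, which gauges whether a set of scored multiple-choice items hangs together as a single scale. The answer matrix is made up for illustration; validating a real instrument involves far more than this one statistic.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: respondents x items matrix of 0/1 scored answers."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering four items (1 = correct, 0 = incorrect).
answers = [[1, 1, 1, 0],
           [1, 1, 0, 0],
           [0, 1, 0, 0],
           [1, 1, 1, 1],
           [0, 0, 0, 0]]
print(f"alpha = {cronbach_alpha(answers):.2f}")  # 0.80; values near 1 suggest a coherent scale
```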

There’s a lot more work to do to create reliable and feasible testing approaches. But going forward, just asking people to self-report their AI literacy probably isn’t enough to understand where different groups of people are and what supports they need.

Approaches to building AI literacy

Governments, universities and industry are trying to advance AI literacy.

Finland launched the Elements of AI series in 2018 with the hope of educating its general public on AI. Estonia’s AI Leap initiative partners with Anthropic and OpenAI to provide access to AI tools for tens of thousands of students and thousands of teachers. And China is now requiring at least eight hours of AI education annually as early as elementary school, which goes a step beyond the new U.S. executive order. On the university level, Purdue University and the University of Pennsylvania have launched new master’s in AI programs, targeting future AI leaders.

Despite these efforts, these initiatives face an unclear and evolving understanding of AI literacy. They also face challenges in measuring effectiveness and minimal knowledge of what teaching approaches actually work. And there are long-standing issues with respect to equity – for example, reaching schools, communities, segments of the population and businesses that are stretched or under-resourced.

Next moves on AI literacy

Based on our research, experience as educators and collaboration with policymakers and technology companies, we think a few steps might be prudent.

Building AI literacy starts with recognizing it’s not just about tech: People also need to grasp the social and ethical sides of the technology. To see whether we’re getting there, we researchers and educators should use clear, reliable tests that track progress for different age groups and communities. Universities and companies can try out new teaching ideas first, then share what works through an independent hub. Educators, meanwhile, need proper training and resources, not just additional curricula, to bring AI into the classroom. And because opportunity isn’t spread evenly, partnerships that reach under-resourced schools and neighborhoods are essential so everyone can benefit.

Critically, achieving widespread AI literacy may be even harder than building digital and media literacy, so getting there will require serious investment – not cuts – in education and research.

There is widespread consensus that AI literacy is important, whether to boost AI trust and adoption or to empower citizens to challenge AI or shape its future. As with AI itself, we believe it’s important to approach AI literacy carefully, avoiding hype or an overly technical focus. The right approach can prepare students to become “active and responsible participants in the workforce of the future” and empower Americans to “thrive in an increasingly digital society,” as the AI literacy executive order calls for.


About the Authors:

Daniel S. Schiff, Assistant Professor of Political Science, Purdue University; Arne Bewersdorff, Post Doctoral Researcher in Educational Sciences, Technical University of Munich, and Marie Hornberger, Research Associate at the School of Social Sciences and Technology, Technical University of Munich

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Will AI take your job? The answer could hinge on the 4 S’s of the technology’s advantages over humans

By Bruce Schneier, Harvard Kennedy School and Nathan Sanders, Harvard University 

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise — and where they don’t — will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

Speed

First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.

AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

Real-time performance matters in these cases, and the speed of AI is necessary to enable them.
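As a small illustration of AI-based upscaling, the sketch below uses OpenCV’s super-resolution module with a pretrained EDSR model and compares it with classical bicubic interpolation. It assumes the opencv-contrib-python package is installed and that the model file (here EDSR_x4.pb, an assumed local path) has been downloaded separately; production pipelines for satellite or video work are far more elaborate.

```python
import cv2

# AI-based 4x super-resolution with a pretrained EDSR model.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")  # assumed local path to downloaded model weights
sr.setModel("edsr", 4)      # algorithm name and upscale factor

image = cv2.imread("input.png")
upscaled = sr.upsample(image)

# Classical baseline for comparison: bicubic interpolation.
baseline = cv2.resize(image, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("upscaled_ai.png", upscaled)
cv2.imwrite("upscaled_cubic.png", baseline)
```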

Scale

The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.

AI models can do this for every single product, TV show, website and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price – thousands of times per second.
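A toy Python sketch shows the decision an advertiser’s model makes at each auction: estimate the chance the user clicks, multiply by the value of a click, and bid only when that expected value clears the auction’s floor price. The function and numbers are illustrative, not any real ad platform’s API.

```python
def decide_bid(predicted_ctr, value_per_click, floor_price):
    """Classic expected-value bidding: bid what the impression is worth to us."""
    expected_value = predicted_ctr * value_per_click
    if expected_value > floor_price:
        return expected_value  # our maximum bid for this impression
    return None                # sit out this auction

# A user with a 2% predicted click chance on a $1.50-per-click campaign:
bid = decide_bid(predicted_ctr=0.02, value_per_click=1.50, floor_price=0.01)
print(bid)  # 0.03 -> bid 3 cents for this impression
```

Real systems run this decision loop thousands of times per second, with far richer models behind the predicted click probability.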

Scope

Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.

It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.

How AI is affecting the job market.

Sophistication

Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. The 1990s chess-playing computer systems such as Deep Blue succeeded by thinking a dozen or more moves ahead.
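To show the shape of that search-based approach, here is a generic minimax sketch in Python: look a fixed number of moves ahead and assume each side picks its best option. Deep Blue added alpha-beta pruning, handcrafted evaluation and specialized hardware; this toy version only illustrates the core idea.

```python
def minimax(state, depth, maximizing, moves_fn, apply_fn, eval_fn):
    """Score `state` by looking `depth` moves ahead.

    moves_fn(state)       -> list of legal moves
    apply_fn(state, move) -> state after the move
    eval_fn(state)        -> heuristic score from the maximizer's viewpoint
    """
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)
    scores = [minimax(apply_fn(state, m), depth - 1, not maximizing,
                      moves_fn, apply_fn, eval_fn) for m in moves]
    return max(scores) if maximizing else min(scores)

# Demo on an explicit toy game tree: internal nodes map moves to subtrees,
# leaves are numeric evaluations.
tree = {"a": {"c": 3, "d": 5}, "b": {"e": 2, "f": 9}}
moves = lambda s: list(s) if isinstance(s, dict) else []
apply_move = lambda s, m: s[m]
evaluate = lambda s: s  # leaves already carry their scores

print(minimax(tree, depth=2, maximizing=True, moves_fn=moves,
              apply_fn=apply_move, eval_fn=evaluate))  # 3: best worst-case line
```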

Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions – often many billions – among many factors. Neural networks now power the best chess-playing models and most other AI systems.

Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in chemistry in 2024, is another example.

This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.

But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.

Context matters

Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly – yet accuracy isn’t the differentiator. The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.

Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.

For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.

It is this “phase shift,” when changes in degree may transform into changes in kind, that AI’s impacts on society are likely to be most keenly felt. All of this points to the places where AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope or sophistication, or when one of these factors poses a real barrier to accomplishing a goal, it makes sense to think about how AI could help.

Equally, when speed, scale, scope and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.

Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.

Where the advantage lies

Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

About the Authors:

Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School and Nathan Sanders, Affiliate, Berkman Klein Center for Internet & Society, Harvard University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns

By Mark Finlayson, Florida International University and Azwad Anjum Islam, Florida International University 

It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it’s a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs.

This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands. For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.

While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defenses against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content.

At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines, and decode cultural references.

Disinformation vs. misinformation

In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren’t isolated incidents. They were part of an organized campaign, powered in part by AI.

Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information – getting facts wrong – disinformation is intentionally fabricated and shared specifically to mislead and manipulate. A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook.

Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate U.S. politics and stoke divisions among Americans.

Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don’t just help people remember – they help us feel. They foster emotional connections and shape our interpretations of social and political events.

Stories have profound effects on human beliefs and behavior.

This makes them especially powerful tools for persuasion – and, consequently, for spreading disinformation. A compelling narrative can override skepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.

Usernames, cultural context and narrative time

Using AI tools to piece together a picture of a story’s narrator, the timeline for how they tell it, and cultural details specific to where the story takes place can help identify when a story doesn’t add up.

Narratives are not confined to the content users share – they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience.

For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.

Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection – one that considers not just what is said but who appears to be saying it and why.
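
That system itself isn’t reproduced here, but a minimal sketch can illustrate the general idea: split a handle into tokens, then match them against lookup tables of names, places and affiliations. Everything below (the function, the tables, the matching rules) is invented for illustration and is far simpler than a trained model:

```python
import re

# Toy lookup tables; a real system would use large gazetteers
# plus a trained model. All entries here are invented.
FIRST_NAMES = {"james": "male", "jim": "male", "maria": "female"}
AFFILIATIONS = {"nyt": "The New York Times"}
PLACES = {"nyc": "New York City"}

def persona_signals(handle: str) -> dict:
    """Infer rough persona cues embedded in a social media handle."""
    signals = {}
    # Break the handle into candidate tokens: CamelCase words,
    # all-caps acronyms, lowercase runs and digit runs.
    tokens = re.findall(r"[A-Z][a-z]+|[A-Z]{2,}|[a-z]+|\d+", handle.lstrip("@"))
    for tok in (t.lower() for t in tokens):
        if tok in FIRST_NAMES:
            signals["name"] = tok.capitalize()
            signals["gender_cue"] = FIRST_NAMES[tok]
        if tok in AFFILIATIONS:
            signals["affiliation_cue"] = AFFILIATIONS[tok]
        if tok in PLACES:
            signals["location_cue"] = PLACES[tok]
    return signals

print(persona_signals("@JamesBurnsNYT"))
# {'name': 'James', 'gender_cue': 'male', 'affiliation_cue': 'The New York Times'}
print(persona_signals("@JimB_NYC"))
# {'name': 'Jim', 'gender_cue': 'male', 'location_cue': 'New York City'}
```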

Also, stories don’t always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between.

Humans handle this effortlessly – we’re used to fragmented storytelling. But for AI, determining a sequence of events based on a narrative account remains a major challenge.

Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.
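
One common way to frame that task, sketched below, is to treat pairwise “happened-before” judgments as a graph and sort it into a timeline. The event labels and relations here are invented stand-ins for what an upstream classifier might extract from a nonlinear thread, not output from our lab’s system:

```python
from graphlib import TopologicalSorter

# The thread *opens* with the arrest, but the arrest happened *last*.
# Each key maps an event to the events known to precede it.
happened_before = {
    "arrest announced": {"clip goes viral"},   # virality precedes the arrest
    "clip goes viral": {"ballots filmed"},     # filming precedes virality
    "ballots filmed": set(),                   # earliest known event
}

# A topological sort turns the pairwise relations into a timeline.
timeline = list(TopologicalSorter(happened_before).static_order())
print(timeline)
# ['ballots filmed', 'clip goes viral', 'arrest announced']
```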

Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation.

Consider the following sentence: “The woman in the white dress was filled with joy.” In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive.

To use AI to detect disinformation that weaponizes symbols, sentiments and storytelling within targeted communities, it’s critical to give AI this sort of cultural literacy. In our research, we’ve found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.

Who benefits from narrative-aware AI?

Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use these tools to process large volumes of social media posts, mapping persuasive narrative arcs, identifying near-identical storylines and flagging coordinated timing of social media activity. Intelligence services could then deploy countermeasures in real time.
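
As a deliberately simplified illustration of one step in that pipeline, the sketch below flags near-identical storylines using TF-IDF vectors and cosine similarity. A deployed system would use neural text embeddings over millions of posts and fold in timing signals; the posts and the similarity threshold here are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy posts standing in for a large social media stream.
posts = [
    "Election worker caught destroying ballots on camera!",
    "CAUGHT ON CAMERA: election worker destroys ballots",
    "Local bakery wins award for best sourdough in the county",
]

vectors = TfidfVectorizer().fit_transform(posts)
sims = cosine_similarity(vectors)

# Flag pairs of near-identical storylines pushed as separate posts.
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if sims[i, j] > 0.5:  # threshold chosen for illustration
            print(f"possible coordinated pair: post {i} and post {j}")
# possible coordinated pair: post 0 and post 1
```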

In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.

Ordinary users can also benefit from these technologies. The AI tools could flag social media posts in real time as possible disinformation, allowing readers to be skeptical of suspect stories, thus counteracting falsehoods before they take root.

As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond just traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.

About the Authors:

Mark Finlayson, Associate Professor of Computer Science, Florida International University and Azwad Anjum Islam, Ph.D. Student in Computing and Information Sciences, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Can you upload a human mind into a computer? A neuroscientist ponders what’s possible

By Dobromir Rahnev, Georgia Institute of Technology 

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to [email protected].


Is it possible to upload the consciousness of your mind into a computer? – Amreen, age 15, New Delhi, India


The concept, cool yet maybe a little creepy, is known as mind uploading. Think of it as a way to create a copy of your brain, a transmission of your mind and consciousness into a computer. There you would live digitally, perhaps forever. You’d have an awareness of yourself, you’d retain your memories and still feel like you. But you wouldn’t have a body.

Within that simulated environment, you could do anything you do in real life – eating, driving a car, playing sports. You could also do things impossible in the real world, like walking through walls, flying like a bird or traveling to other planets. The only limit is what science can realistically simulate.

Doable? Theoretically, mind uploading should be possible. Still, you may wonder how it could happen. After all, researchers have barely begun to understand the brain.

Yet science has a track record of turning theoretical possibilities into reality. Just because a concept seems terribly, unimaginably difficult doesn’t mean it’s impossible. Consider that science took humankind to the Moon, sequenced the human genome and eradicated smallpox. Those things too were once considered unlikely.

As a brain scientist who studies perception, I fully expect mind uploading to one day be a reality. But as of today, we’re nowhere close.

Living in a laptop

The brain is often regarded as the most complex object in the known universe. Replicating all that complexity will be extraordinarily difficult.

One requirement: The uploaded brain needs the same inputs it always had. In other words, the external world must be available to it. Even cloistered inside a computer, you would still need a simulation of your senses, a reproduction of the ability to see, hear, smell, touch, feel – as well as move, blink, detect your heart rate, set your circadian rhythm and do thousands of other things.

But why is that? Couldn’t you just exist in a pure mental bubble, inside the computer without sensory input?

Depriving people of their senses, like putting them in total darkness or in a room without sound, is known as sensory deprivation, and it’s regarded as a form of torture. People who have trouble sensing their bodily signals – thirst, hunger, pain, an itch – often have mental health challenges.

That’s why for mind uploading to work, the simulation of your senses and the digital environment you’re in must be exceptionally accurate. Even minor distortions could have serious mental consequences.

For now, researchers don’t have the computing power, much less the scientific knowledge, to perform such simulations.

Scanning billions of pinheads

The first task for a successful mind upload: scanning, then mapping the complete 3D structure of the human brain. This requires the equivalent of an extraordinarily sophisticated MRI machine, one able to resolve the brain down to its individual neurons and their connections. At the moment, scientists are only at the very early stages of brain mapping; efforts so far include the entire brain of a fly and tiny portions of a mouse brain.

In a few decades, a complete map of the human brain may be possible. Yet even capturing the identities of all 86 billion neurons, all smaller than a pinhead, plus their trillions of connections, still isn’t enough. Uploading this information by itself into a computer won’t accomplish much. That’s because each neuron constantly adjusts its functioning, and that has to be modeled, too.
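
A back-of-envelope estimate shows why. Assuming roughly 100 trillion synapses, a commonly cited figure, and a generous 8 bytes to record each connection, the bare wiring diagram fits in under a petabyte:

```python
# Back-of-envelope: how big is a bare wiring diagram of the brain?
# Assumed figures (illustrative, not measured): ~100 trillion synapses
# across the 86 billion neurons, and 8 bytes per synapse to record
# its endpoints and strength.
synapses = 100e12
bytes_per_synapse = 8

total_bytes = synapses * bytes_per_synapse    # 8e14 bytes
print(f"{total_bytes / 1e15:.1f} petabytes")  # ~0.8 petabytes
```

Under a petabyte is large but storable today. The catch is everything that snapshot leaves out: it is a frozen map, capturing none of each neuron’s constantly changing behavior.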

It’s hard to know how many levels down researchers must go to make the simulated brain work. Is it enough to stop at the molecular level? Right now, no one knows.

2045? 2145? Or later?

Knowing how the brain computes things might provide a shortcut. That would let researchers simulate only the essential parts of the brain, and not all biological idiosyncrasies. It’s easier to manufacture a new car knowing how a car works than to scan and replicate an existing one without any knowledge of its inner workings.

However, this approach requires that scientists figure out how the brain creates thoughts – how collections of thousands to millions of neurons come together to perform the computations that make the human mind come alive. It’s hard to express how very far we are from this.

Here’s another way: Replace the 86 billion real neurons with artificial ones, one at a time. That approach would make mind uploading much easier. Right now, though, scientists can’t replace even a single real neuron with an artificial one.

But keep in mind the pace of technology is accelerating exponentially. It’s reasonable to expect spectacular improvements in computing power and artificial intelligence in the coming decades.

One other thing is certain: Mind uploading will have no problem finding funding. Many billionaires appear glad to part with lots of their money for a shot at living forever.

Although the challenges are enormous and the path forward uncertain, I believe that one day, mind uploading will be a reality. The most optimistic forecasts pinpoint the year 2045, only 20 years from now. Others say the end of this century.

But in my mind, both of these predictions are probably too optimistic. I would be shocked if mind uploading works in the next 100 years. But it might happen in 200 – which means the first person to live forever could be born in your lifetime.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to [email protected]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Dobromir Rahnev, Associate Professor of Psychology, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Do photons wear out? An astrophysicist explains light’s ability to travel vast cosmic distances without losing energy

By Jarred Roberts, University of California, San Diego 

My telescope, set up for astrophotography in my light-polluted San Diego backyard, was pointed at a galaxy unfathomably far from Earth. My wife, Cristina, walked up just as the first space photo streamed to my tablet. It sparkled on the screen in front of us.

“That’s the Pinwheel galaxy,” I said. The name is derived from its shape – though this pinwheel contains about a trillion stars.

The light from the Pinwheel traveled for 25 million years across the universe – about 150 quintillion miles – to get to my telescope.

My wife wondered: “Doesn’t light get tired during such a long journey?”

Her curiosity triggered a thought-provoking conversation about light. Ultimately, why doesn’t light wear out and lose energy over time?

Let’s talk about light

I am an astrophysicist, and one of the first things I learned in my studies is how light often behaves in ways that defy our intuitions.

[Image: The author’s photo of the Pinwheel galaxy. Credit: Jarred Roberts]

Light is electromagnetic radiation: basically, an electric wave and a magnetic wave coupled together and traveling through space-time. It has no mass. That point is critical because the mass of an object, whether a speck of dust or a spaceship, limits the top speed it can travel through space.

But because light is massless, it’s able to reach the maximum speed limit in a vacuum – about 186,000 miles (300,000 kilometers) per second, or almost 6 trillion miles per year (9.6 trillion kilometers). Nothing traveling through space is faster. To put that into perspective: In the time it takes you to blink your eyes, a particle of light travels around the circumference of the Earth more than twice.

As fast as that is, space is incredibly spread out. Light from the Sun, which is 93 million miles (about 150 million kilometers) from Earth, takes just over eight minutes to reach us. In other words, the sunlight you see is eight minutes old.

Alpha Centauri, the nearest star to us after the Sun, is 26 trillion miles away (about 41 trillion kilometers). So by the time you see it in the night sky, its light is just over four years old. Or, as astronomers say, it’s four light years away.
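
All of these figures follow from simple arithmetic. Here is a quick check (the 0.3-second blink is an assumption, and every value is approximate):

```python
# A quick check of the figures above (all values approximate).
C = 186_000                     # speed of light, miles per second
EARTH_CIRCUMFERENCE = 24_901    # miles
SUN_DISTANCE = 93e6             # miles
ALPHA_CENTAURI = 26e12          # miles
MILES_PER_LIGHT_YEAR = C * 3600 * 24 * 365.25

BLINK = 0.3                     # assumed blink duration, seconds
print(f"Laps around Earth per blink: {C * BLINK / EARTH_CIRCUMFERENCE:.1f}")      # ~2.2
print(f"Sunlight travel time: {SUN_DISTANCE / C / 60:.1f} minutes")               # ~8.3
print(f"Alpha Centauri: {ALPHA_CENTAURI / MILES_PER_LIGHT_YEAR:.1f} light years") # ~4.4
```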

With those enormous distances in mind, consider Cristina’s question: How can light travel across the universe and not slowly lose energy?

Actually, some light does lose energy. This happens when it bounces off something, such as interstellar dust, and is scattered about.

But most light just goes and goes, without colliding with anything. This is almost always the case because space is mostly empty – nothingness. So there’s nothing in the way.

When light travels unimpeded, it loses no energy. It can maintain that 186,000-mile-per-second speed forever.

It’s about time

Here’s another concept: Picture yourself as an astronaut on board the International Space Station. You’re orbiting at 17,000 miles (about 27,000 kilometers) per hour. Compared with someone on Earth, your wristwatch will tick 0.01 seconds slower over one year.

That’s an example of time dilation – time moving at different speeds under different conditions. If you’re moving really fast, or close to a large gravitational field, your clock will tick more slowly than someone moving slower than you, or who is further from a large gravitational field. To say it succinctly, time is relative.
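
The 0.01-second figure falls out of the special-relativistic time dilation factor, gamma = 1/sqrt(1 - v^2/c^2). Here is a quick check, ignoring the smaller gravitational correction at the station’s altitude:

```python
import math

C = 299_792_458              # speed of light, meters per second
v = 7_660                    # ISS orbital speed (~17,000 mph), m/s
YEAR = 365.25 * 24 * 3600    # seconds in a year

gamma = 1 / math.sqrt(1 - (v / C) ** 2)
lag = YEAR * (1 - 1 / gamma)          # seconds the orbiting clock falls behind
print(f"{lag:.3f} seconds per year")  # ~0.010, matching the figure above
```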

[Image: An astronaut floats weightless aboard the International Space Station, where time dilation is real but extremely small. Credit: NASA]

Now consider that light is inextricably connected to time. Picture sitting on a photon, a fundamental particle of light; here, you’d experience maximum time dilation. Everyone on Earth would clock you at the speed of light, but from your reference frame, time would completely stop.

That’s because the “clocks” measuring time are in two different places moving at vastly different speeds: the photon traveling at the speed of light, and Earth orbiting the Sun at a comparatively slowpoke pace.

What’s more, when you’re traveling at or close to the speed of light, the distance between where you are and where you’re going gets shorter. That is, space itself becomes more compact in the direction of motion – so the faster you can go, the shorter your journey has to be. In other words, for the photon, space gets squished.
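
The same gamma factor from time dilation sets how much space gets squished: a journey of rest length L0 appears as L0/gamma to the traveler. The speeds below are made up purely to show the trend toward zero as v approaches c:

```python
import math

L0 = 25e6  # rest length of the trip: 25 million light years
for beta in (0.9, 0.999, 0.999999):          # fraction of light speed
    gamma = 1 / math.sqrt(1 - beta ** 2)
    print(f"at {beta}c the trip is {L0 / gamma:,.0f} light years long")
# At exactly v = c the contracted length reaches zero: for the photon,
# departure and arrival coincide.
```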

Which brings us back to my picture of the Pinwheel galaxy. From the photon’s perspective, a star within the galaxy emitted it, and then a single pixel in my backyard camera absorbed it, at exactly the same time. Because space is squished, to the photon the journey was infinitely fast and infinitely short, taking no time at all.

But from our perspective on Earth, the photon left the galaxy 25 million years ago and traveled 25 million light years across space until it landed on my tablet in my backyard.

And there, on a cool spring night, its stunning image inspired a delightful conversation between a nerdy scientist and his curious wife.

About the Author:

Jarred Roberts, Project Scientist, University of California, San Diego

This article is republished from The Conversation under a Creative Commons license. Read the original article.