
A backlash against AI imagery in ads may have begun as brands promote ‘human-made’

By Paul Harrison, Deakin University 

In a wave of new ads, brands like Heineken, Polaroid and Cadbury have started hating on artificial intelligence (AI), celebrating their work as “human-made”.

But through these advertising campaigns – on TV, on billboards in New York streets and on social media – the companies are signalling something larger.

Even Apple’s new series release, Pluribus, includes the phrase “Made by Humans” in the closing credits.

Other brands including H&M and Guess have faced a backlash for using AI brand ambassadors instead of humans.

These gestures suggest we have reached a cultural moment in the evolution of this technology, where people are unsure what creativity means when machines can now produce much of what we see, hear and perhaps even be moved by.

This feels like efficiency – for executives

At a surface level, AI offers efficiencies such as faster production, cheaper visuals, instant personalisation, and automated decisions. Government and business have rushed toward it, drawn by promises of productivity and innovation. And there is no doubt that this promise is deeply seductive. Indeed, efficiency is what AI excels at.

In the context of marketing and advertising, this “promise”, at least at face value, seems to translate to smaller marketing budgets, better targeting, automated decisions (including by chatbots) and rapid deployment of ad campaigns.

For executives, this is exciting and feels like real progress, with cheaper, faster and more measurable brand campaigns.

But advertising has never really just been about efficiency. It has always relied on a degree of emotional truth and creative mystery. That psychological anchor – a belief that human intention sits behind what we are looking at – turns out to matter more than we like to admit.

Turns out, people care about authenticity

Indeed, people often value objects more when they believe those objects carry traces of a person’s intention or history. This is the case even when the works themselves don’t differ in any material way from a computer-generated image.

To some degree, this signals consumers are sensitive to the presence of a human creator, because when visually compelling computer-generated images are labelled as machine-made, people tend to rate them less favourably.

Indeed, when the same paintings are randomly labelled as either “human created” or “AI created”, people consistently judge the works they believe to be “human created” as more beautiful, meaningful and profound.

It seems the simple presence of an AI label reduces the perceived creativity and value.

A betrayal of creativity

However, there is an important caveat here. These studies rely on people being told who made the work. The effect is a result of attribution, not perception. And so this limitation points towards a deeper problem.

If evaluations change purely because people believe a work was machine made, the response is not about quality, it is about meaning. It reflects a belief that creativity is tied to intention, effort and expression. These are qualities an algorithm doesn’t possess, even when it creates something visually persuasive. In other words, the label carries emotional weight.

There are, of course, obvious examples of when AI goes comedically wrong. In early 2024, the Queensland Symphony Orchestra promoted its brand using a very strange AI-generated image most people instantly recognised as unnatural. Part of the backlash, along with the unsettling weirdness of the image, was the perception an arts organisation was betraying human creativity.

But as AI systems improve, people often struggle to distinguish synthetic from real. Indeed, AI-generated faces are judged by many to be just as real as, and sometimes more trustworthy than, actual photographs.

Research shows people overestimate their ability to detect deepfakes, and often mistake deepfake videos as authentic.

Although we can see emerging patterns here, the empirical research in this area is being outpaced by AI’s evolving capabilities. So we are often trying to understand psychological responses to a technology that has already evolved since the research took place.

As AI becomes more sophisticated, the boundary between human and machine-made creativity will become harder to perceive. Commerce may not be particularly troubled by this. If the output performs well, the question of origin becomes secondary.

Why we value creativity

But creative work has never been only about generating content. It is a way for people to express emotion, experience, memory, dissent and interpretation.

And perhaps this is why the rise of “Made by Humans” actually matters. Marketers are not simply selling provenance, they are responding to a deeper cultural anxiety about authorship in a moment when the boundaries of creativity are becoming harder to perceive.

Indeed, one could argue there is an ironic tension here. Marketing is one of the professions most exposed to being superseded by the same technology marketers are now trying to differentiate themselves from.

So whether these human-made claims are a commercial tactic or a sincere defence of creative intention, there is significantly more at stake than just another way to drive sales.

About the Author:

Paul Harrison, Director, Master of Business Administration Program (MBA); Co-Director, Better Consumption Lab, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Yes, there is an AI investment bubble – here are three scenarios for how it could end

By Sergi Basco

Booms and busts are a recurring feature of modern economics, but when an asset’s value becomes overinflated, a boom quickly becomes a bubble.

The two most recent major bubble episodes were the dot-com bubble in the United States (1996-2000) and the housing bubbles that emerged around 2006 in different countries. Both ended in recession – the former relatively mild, and the latter catastrophically bad. Recent, dizzying increases in the stock prices of AI-related companies have now got many investors asking “are we witnessing another asset price bubble?”

It is important to put the current AI boom in context. The stock price of Nvidia – which manufactures many of the computer chips that power the AI industry – has multiplied by 13 since the start of 2023. Stocks in other AI-related companies like Microsoft and Google’s parent company Alphabet have multiplied by 2.1 and 3.2, respectively. In comparison, the S&P 500, which tracks the stocks of the most important US firms, has multiplied by just 1.8 in the same period.

It is important to emphasise that these AI-related companies are included in the S&P 500, making the difference with non-AI companies even larger. Accordingly, it seems that there is an AI bubble – but it won’t necessarily end in a repeat of 2008.

How a bubble forms

The price of any stock can be broken down into two components: its fundamental value and a bubble component. If the stock’s price is above its fundamental value, there is a bubble in its price.

The fundamental value of an asset is the discounted sum of its expected future dividends. The key word here is “expected”. Given that no one, not even ChatGPT, can predict the future, the fundamental value depends on the subjective expectations of each investor. They might be optimistic or pessimistic; in time, some will be proven right, and others wrong.
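Since each investor plugs their own expectations into the same formula, the disagreement is easy to make concrete. Below is a minimal Python sketch of a dividend-discount valuation – the dividend paths, growth rates and discount rate are invented purely for illustration:

```python
def fundamental_value(expected_dividends, discount_rate):
    """Discounted sum of expected future dividends."""
    return sum(d / (1 + discount_rate) ** t
               for t, d in enumerate(expected_dividends, start=1))

# Same stock, same discount rate -- only the *expected* dividend path differs.
optimist = fundamental_value([5 * 1.10 ** t for t in range(30)], 0.08)   # 10% dividend growth
pessimist = fundamental_value([5 * 1.01 ** t for t in range(30)], 0.08)  # 1% dividend growth
```

Any market price above an investor’s own `fundamental_value` is, to that investor, a bubble component – which is why the same price can look fairly valued to one person and bubbly to another.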

Optimistic investors expect that AI will change the world, and that the owners of this technology will make (almost) infinite profits. Not knowing which company will emerge victorious, they invest in all AI-related companies.

In contrast, pessimistic investors think that AI is just sophisticated software, as opposed to truly groundbreaking technology, and they will see bubbles everywhere.

A third possibility is the more sophisticated investor: people who think – or know – that there is a bubble, but keep investing in the hope of riding the wave and getting off before it is too late.

The last of these possibilities is reminiscent of the infamous quote from Citigroup CEO Chuck Prince before the 2008 housing bubble burst: “as long as the music is playing, you’ve got to get up and dance”.

As an economist, I can safely say that it is impossible for all AI-related companies to end up dominating the market. This means, beyond a doubt, that the value of at least some AI-related stocks has a large bubble component.

A shortage of assets

Asset price bubbles can be the market’s natural response to a shortage of assets. In a moment when the demand for assets exceeds the supply (especially for safe assets like government bonds), there is room for other, newer assets to emerge.

This pattern explains the emergence of, for example, the 1990s dot-com bubble and the subsequent 2000s housing bubble. In that context, the growing role of China in financial markets increased the demand for assets in the West – the money first went to dot-com companies in the 1990s and, when that bubble burst, to fund housing via mortgage-backed securities.

In today’s context, a combination of factors has paved the way for the AI bubble: excitement around new technology, low interest rates (another sign of a shortage of assets) and huge amounts of cash flowing into large corporations.

The bubble bursts: good, bad and ugly scenarios

At the very least, part of the soaring value of AI-related stocks is a bubble – and a bubble cannot stay inflated forever. It has to either burst on its own, or, ideally, be carefully deflated through targeted government or Central Bank measures. The current AI bubble could end in one of three scenarios: good, bad, or ugly.

Good: boom not bubble

During the dot-com bubble, many bad firms received too much money – the classic example was Pets.com. But the bubble also provided financing to companies like Google, which (arguably) contributed to making the internet a productivity-enhancing technology.

Something similar may happen with AI, as the current flurry of investment could, in the long run, create something good: technology that benefits humanity, and eventually yields a return on investment. Without bubble levels of cash flow, it would not be funded.

In this optimistic scenario I am assuming that AI, even though it may displace some jobs in the short term (as most technology does), will turn out to be good for workers. I am also assuming that it, obviously, won’t lead to the extinction of humanity. For this to be the case, governments need to introduce proper, robust regulations. It is also important to emphasise that not every country needs to invent or invest in new technologies – they can instead adapt them and develop the applications that make them useful.

Bad: a gentle burst

All bubbles eventually burst. As things stand, we do not know when this will happen, nor the extent of the potential damage, but there will probably be a market correction when enough investors realise that multiple companies are overvalued. This decline in the stock market is bound to cause a recession.

Hopefully, it will be short-lived like the 2001 recession that followed the burst of the dot-com bubble. While no recession is painless, this one was relatively mild, and lasted less than one year in the US.

However, the burst of the AI bubble may be more painful because more households participate (either directly or indirectly via mutual funds) in the stock market than 20 years ago.

Even though the job of Central Banks is not to control asset prices, they may need to consider raising interest rates to deflate the bubble before it gets too large. The more sudden the crash, the deeper and costlier any ensuing recession will be.

Ugly: crash and burn

The burst of the AI-bubble would be ugly if it shares more features than we imagine with the 2000s housing bubble. On the positive side, AI stocks are not houses. This is good because when housing bubbles burst, the impacts on the economy are larger and longer-lasting than with other assets.

The housing bubble did not just cause the 2008 recession – it also caused the global financial system to collapse. Another reason to be optimistic is that the role of commercial banks in AI finance is much smaller than in housing, where a vast amount of every bank’s money was tied up in mortgages.

However, one important caveat is that we do not know how the financial system will react if these huge AI companies default on their debt. Alarmingly, this seems to be how they are currently financing new investments – a recent Bank of America analysis warned that large tech companies are relying heavily on debt to build new data centres, many of which are to cover demand that doesn’t actually exist yet.




Sergi Basco, Associate Professor of Economics, Universitat de Barcelona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Energy Co. to Combine With Semiconductor Co. to Create AI Infrastructure

Source: Streetwise Reports (10/10/25)

Energy innovation company Jericho Energy Ventures Inc. (JEV:TSX.V; JROOF:OTC; JLM:FRA) says it has signed a non-binding Letter of Intent (LOI) for a proposed all-stock business combination with Smartkem Inc. (SMTK:Nasdaq).

Energy innovation company Jericho Energy Ventures Inc. (JEV:TSX.V; JROOF:OTC; JLM:FRA) announced it has signed a non-binding Letter of Intent (LOI) dated October 6, 2025, with Smartkem Inc. (SMTK:Nasdaq), a company pioneering a new class of organic semiconductor technology, for a proposed all-stock business combination, according to a release.

If finalized, the Proposed Transaction would create a Nasdaq-listed, U.S.-owned and controlled artificial intelligence (AI) infrastructure company, merging low-cost domestic energy with advanced semiconductor packaging and materials to meet the rising demand for AI compute capacity.

JEV said it is strategically positioned at the crossroads of energy and AI, utilizing its robust energy framework and renewable innovations to provide reliable, cost-effective power for AI data centers.

The proposed transaction aims to integrate Smartkem’s patented organic semiconductor platform into Jericho’s infrastructure to accelerate: energy-efficient AI data centers designed for next-generation workloads, advanced AI chip packaging that minimizes power consumption and heat, low-power optical data transmission for faster interconnects, and conformable sensors for environmental monitoring and operational resilience, Jericho noted in the release.

“AI compute growth is driving unprecedented demand for U.S. power and infrastructure,” Jericho Chief Executive Officer Brian Williamson said. “By combining JEV’s scalable energy platform with Smartkem’s semiconductor breakthroughs, we can deliver a new generation of faster, efficient, and more resilient AI data centers.”

Ian Jenks, chairman and CEO of Smartkem, added, “This proposed transaction positions Smartkem’s technology at the center of the largest technology build-out of our era. We believe this combination provides the pathway for our patented materials to reach their full commercial potential inside next-generation AI infrastructure.”

“Together, JEV and Smartkem are developing a unified U.S. platform for AI data centers that pairs energy resilience with advanced semiconductors, a vertically integrated strategy aimed at driving sustainable growth and creating value for shareholders,” said Anthony Amato, strategic advisor to Smartkem.

According to Jericho, some highlights of the proposed transaction include establishing a fully integrated platform covering energy supply and AI data center infrastructure and positioning the combined company to capitalize on the forecasted growth in U.S. power demand for AI data centers.

The combination of JEV’s scalable energy and infrastructure expertise with Smartkem’s patented organic semiconductor materials and OTFT technologies will drive innovation and enhance data center efficiency, JEV said.

The transaction “ensures strategic technology assets are developed, deployed, and scaled under U.S. ownership for global AI infrastructure partners,” the release said.

It also combines two experienced management teams “focused on commercializing disruptive innovations at scale.”

Terms of the Proposed Transaction

Under the LOI, the proposed transaction is structured as an all-stock business combination, executed through either a share exchange or statutory merger, Jericho said. In this arrangement, Smartkem would be the surviving legal entity and continue as a publicly listed company on The Nasdaq Stock Market, becoming the “combined company.”

Upon closing, Jericho stockholders would own 65%, while Smartkem stockholders, prior to the transaction, would own 35% of the fully diluted equity securities of the combined company, subject to certain adjustments.

Brian Williamson, currently the CEO of Jericho, would assume the role of CEO for the combined company, according to the release. The board of directors would be reconstituted to include a majority of members designated by Jericho, in compliance with Nasdaq and SEC requirements.

Both companies will require significant additional capital to negotiate, obtain the necessary stockholder approvals for, and complete the proposed transaction. Closing is contingent on several conditions, including negotiating a definitive agreement, satisfactory due diligence, board and stockholder approvals, and Nasdaq’s approval for continued listing.

Smartkem and Jericho have agreed to a 60-day exclusivity period to negotiate the terms of a definitive agreement. This period can be terminated by either party under certain conditions, including if Smartkem does not purchase Jericho common shares worth at least US$500,000 by November 30, 2025. While the LOI is active, Smartkem will purchase Jericho common shares from treasury, subject to certain conditions.

The transaction terms outlined in the LOI are expected to be replaced by a definitive agreement. The final legal structure may be adjusted based on tax, corporate, securities, and accounting considerations.

About Smartkem

Smartkem is revolutionizing electronics with a new class of transistors developed using its proprietary semiconductor materials, Jericho said in the release. Its TRUFLEX® semiconductor polymers enable low-temperature printing processes compatible with existing manufacturing infrastructure, delivering low-cost, high-performance displays. The platform is applicable in various display technologies, including MicroLED, LCD, and AMOLED, as well as advanced computer and AI chip packaging, sensors, and logic.

Smartkem designs and develops its materials at its R&D facility in Manchester, U.K., and offers prototyping services at the Centre for Process Innovation (CPI) in Sedgefield, U.K. It also operates a field application office in Hsinchu, Taiwan, near its collaboration partner, The Industrial Technology Research Institute (ITRI).

Smartkem is developing a commercial-scale production process and Electronic Design Automation (EDA) tools to demonstrate the commercial viability of manufacturing a new generation of displays using its materials.

The company holds an extensive IP portfolio, including 140 granted patents across 17 patent families, 14 pending patents, and 40 codified trade secrets. For more information, visit the Smartkem website or follow them on LinkedIn.

JEV’s Data Center Initiative

Earlier this year, Jericho launched its data center initiative, strategically leveraging its expansive 41,000-acre portfolio of active oil and gas joint venture properties in Oklahoma. By harnessing abundant, low-cost on-site natural gas, JEV is transforming its energy assets into secure, scalable, high-performance AI computing hubs tailored for the AI era.

JEV’s build-to-suit (BTS) data centers capitalize on the company’s extensive network of over 60 miles of gas, power, and water infrastructure, along with prime positioning on a U.S. fiber “superhighway,” to offer unparalleled connectivity and performance.

In July, Jericho announced a memorandum of understanding (MOU) with M2 Development Solutions LLC to accelerate the development of AI data centers across the United States. Finalized on July 6, the agreement expands Jericho’s reach beyond its Oklahoma asset base into Ohio and Nevada, utilizing M2’s large-scale development sites.

The Ohio location spans 400 acres and includes access to utility power and on-site natural gas power generation assets. In Nevada, the 3,700-acre site offers a diverse energy mix, including utility power access, on-site geothermal and solar capabilities, and natural gas-fed power generation. These features provide energy diversification options at a scale suitable for AI data center operations, which demand substantial and reliable power sources.

“Our partnership with M2 is a transformative step in executing our AI data center strategy,” said Williamson at the time. “Integrating M2’s gigawatt-scale sites accelerates our ability to deliver scalable, energy-efficient infrastructure for modern AI workloads.”

The Catalyst: We’re Consuming More Electricity Than Ever

In a significant shift from nearly two decades of stagnant U.S. load growth, Americans are now consuming more electricity than ever, according to a report by ICF International. The rapid expansion of data centers to support AI technology, along with a surge in new manufacturing and oil and gas production, is driving a notable increase in industrial electricity demand.

Additionally, electric vehicles, heat pumps, and other energy-intensive products are further contributing to this growth. ICF’s analysis suggests that U.S. electricity demand is expected to rise by 25% by 2030 and by 78% by 2050, compared to 2023 levels. This surge in demand has significant implications for the reliability and affordability of electricity. For residential customers, electricity rates could increase by 15% to 40% by 2030, depending on the market. By 2050, some rates might even double.


In a piece for U.S. Global Investors dated July 25, Frank Holmes compared the current AI advancements to the scale and ambition of the defense expansion during the Reagan era or the shale boom of the 2010s.

According to Grand View Research, the global data center market size was estimated at US$347.6 billion in 2024 and is projected to reach US$652.01 billion by 2030, growing at a compound annual growth rate (CAGR) of 11.2% from 2025 to 2030. “The rapid adoption of digital transformation initiatives, cloud computing, and emerging technologies such as artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT) have substantially increased demand,” Holmes noted.
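For readers who want to sanity-check projections like this, the compound annual growth rate implied by two values and a horizon is a one-line calculation. This sketch is my own arithmetic, not Grand View Research’s methodology, and it roughly reproduces the quoted figure from the 2024 and 2030 values:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by start/end values over `years`."""
    return (end_value / start_value) ** (1 / years) - 1

# US$347.6 billion (2024) -> US$652.01 billion (2030), a six-year horizon
implied = cagr(347.6, 652.01, 6)  # roughly 0.11, i.e. about 11% per year
```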

Ownership and Share Structure

Around 41% of Jericho’s shares are held by management and insiders, the company said. They include CEO Brian Williamson, who owns 1.38%; founder Allen Wilson, who owns 0.99%; and board member Nicholas Baxter, who owns 0.49%, according to Refinitiv’s latest research.

Around 34% of shares are held by the company’s “Top 10 external shareholders.” The rest is in retail.

JEV’s market cap is CA$35.07 million, and it trades in a 52-week range of CA$0.08 to CA$0.21. It has 304.03 million shares outstanding, about 220.98 million of them floating.

 

Important Disclosures:

  1. As of the date of this article, officers and/or employees of Streetwise Reports LLC (including members of their household) own securities of Jericho Energy Ventures Inc.
  2. Steve Sobek wrote this article for Streetwise Reports LLC and provides services to Streetwise Reports as an employee.
  3. This article does not constitute investment advice and is not a solicitation for any investment. Streetwise Reports does not render general or specific investment advice and the information on Streetwise Reports should not be considered a recommendation to buy or sell any security. Each reader is encouraged to consult with his or her personal financial adviser and perform their own comprehensive investment research. By opening this page, each reader accepts and agrees to Streetwise Reports’ terms of use and full legal disclaimer. Streetwise Reports does not endorse or recommend the business, products, services or securities of any company.


Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago

By Cameron Shackell, Queensland University of Technology 

The electrification boom of the 1920s set the United States up for a century of industrial dominance and powered a global economic revolution.

But before electricity faded from a red-hot tech sector into invisible infrastructure, the world went through profound social change, a speculative bubble, a stock market crash, mass unemployment and a decade of global turmoil.

Understanding this history matters now. Artificial intelligence (AI) is a similar general purpose technology and looks set to reshape every aspect of the economy. But it’s already showing some of the hallmarks of electricity’s rise, peak and bust in the decade known as the Roaring Twenties.

The reckoning that followed could be about to repeat.

A crowd gathers outside the New York Stock Exchange following the ‘Great Crash’ of October 1929.
New York World-Telegram and the Sun Newspaper Photograph Collection, US Library of Congress

First came the electricity boom

A century ago, when people at the New York Stock Exchange talked about the latest “high tech” investments, they were talking about electricity.

Investors poured money into suppliers such as Electric Bond & Share and Commonwealth Edison, as well as companies using electricity in new ways, such as General Electric (for appliances), AT&T (telecommunications) and RCA (radio).

It wasn’t a hard sell. Electricity brought modern movies, new magazines from faster printing presses, and evenings by the radio.

It was also an obvious economic game changer, promising automation, higher productivity, and a future full of leisure and consumption. In 1920, even Soviet revolutionary leader Vladimir Lenin declared: “Communism is Soviet power plus the electrification of the whole country.”

Today, a similar global urgency grips both communist and capitalist countries about AI, not least because of military applications.

Then came the peak

Like AI stocks now, electricity stocks “became favorites in the boom even though their fundamentals were difficult to assess”.

Market power was concentrated. Big players used complex holding structures to dodge rules and sell shares in basically the same companies to the public under different names.

US finance professor Harold Bierman, who argued that attempts to regulate overpriced utility stocks were a direct trigger for the crash, estimated that utilities made up 18% of the New York Stock Exchange in September 1929. Within electricity supply, 80% of the market was owned by just a handful of holding firms.

But that’s just the utilities. As today with AI, there was a much larger ecosystem.

Almost every 1920s “megacap” (the largest companies at the time) owed something to electrification. General Motors, for example, had overtaken Ford using new electric production techniques.

Essentially, electricity became the backdrop to the market in the same way AI is doing, as businesses work to become “AI-enabled”.

No wonder that today tech giants command over a third of the S&P 500 index and nearly three-quarters of the NASDAQ. Transformative technology drives not only economic growth, but also extreme market concentration.

In 1929, to reflect the new sector’s importance, Dow Jones launched the last of its three great stock averages: the electricity-heavy Dow Jones Utilities Average.

But then came the bust

The Dow Jones Utilities Average went as high as 144 in 1929. But by 1934, it had collapsed to just 17.

No single cause explains the New York Stock Exchange’s unprecedented “Great Crash”, which began on October 24 1929 and preceded the worldwide Great Depression.

That crash triggered a banking crisis, credit collapse, business failures, and a drastic fall in production. Unemployment soared from just 3% to 25% of US workers by 1933 and stayed in double figures until the US entered the second world war in 1941.

Lithograph of Wall Street, New York City, after the 1929 stock market crash, showing a panicked crowd, lightning and falling buildings. James Rosenberg, Ben and Beatrice Goldstein Foundation collection, US Library of Congress

The ripple effects were global, with most countries seeing a rise in unemployment, especially in countries reliant on international trade, such as Chile, Australia and Canada, as well as Germany.

The promised age of shorter hours and electric leisure turned into soup kitchens and bread lines.

The collapse exposed fraud and excess. Electricity entrepreneur Samuel Insull, once Thomas Edison’s protégé and builder of Chicago’s Commonwealth Edison, was at one point worth US$150 million – an even more staggering amount at the time.

But after Insull’s empire went bankrupt in 1932, he was indicted for embezzlement and larceny. He fled overseas, was brought back, and eventually acquitted – but 600,000 shareholders and 500,000 bondholders lost everything.

However, to some Insull seemed less a criminal mastermind than a scapegoat for a system whose flaws ran far deeper.

Reforms unthinkable during the boom years followed.

The Public Utility Holding Company Act of 1935 broke up the huge holding company structures and imposed regional separation. Once exciting electricity darlings became boring regulated infrastructure: a fact reflected in the humble “Electric Company” square on the original 1935 Monopoly board.

Lessons from the 1920s for today

AI is rolling out faster than even those seeking to use it in business or government policy can properly manage.

Like electricity a century ago, a few interconnected firms are building today’s AI infrastructure.

And like a century ago, investors are piling in – though many don’t know the extent of their exposure through their superannuation funds or exchange traded funds (ETFs).

Just as in the late 1920s, today’s regulation of AI is still loose in many parts of the world – though the European Union is taking a tougher approach with its world-first AI law.

US President Donald Trump has taken the opposite approach, actively cutting “onerous regulation” of AI. Some US states have responded by taking action themselves. The courts, when consulted, are hamstrung by laws and definitions written for a different era.

Can we transition to AI becoming invisible infrastructure, like electricity, without another bust followed only then by reform?

If the parallels to the electrification boom remain unnoticed, the chances are slim.

About the Author:

Cameron Shackell, Sessional Academic, School of Information Systems, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Book Review: Hands-On AI Trading with Python, QuantConnect and AWS

AI is all the rage these days. We know this! But as investors and traders, do we know how to incorporate AI into our systems? Do we even know the many possible ways we could use AI to help our trading? Well, today I am going to bring something a little bit different to the blog: a quick book review!

As a Python coder, automated trader and investor, I feel constantly bombarded with bits and pieces of AI trading information, from newsletters to “how-to” tutorials for implementing this or that. Luckily, I was recently given a complimentary copy of Hands-On AI Trading with Python, QuantConnect and AWS, and it turns out this book is a comprehensive guide that brings a whole lot of information into one place with a consistent presentation and coding style.

Front cover of Hands-On AI Trading

Basic Information:

This book was written by five active, data-driven market professionals who all run businesses or hold positions aligned with the financial markets, AI and automated solutions. Jiri Pik is the CEO of RocketEdge.com; Jared Broad is the founder and CEO of QuantConnect; Ernest Chan is the founder of PredictNow.AI; Philip Sun is the CEO of Adaptive Investment Solutions; and Vivek Singh previously worked at a hedge fund and is now a senior product manager at AWS.

This book is aimed at those in finance, aspiring quants, veteran quants and hedge fund traders, as well as independent traders and investors. As you can tell from the book’s title, there’s a focus on using the Python programming language along with the services of QuantConnect, Amazon Web Services (AWS) and Predictnow.ai.

The authors present these specific tools (QuantConnect, AWS, Predictnow.ai) as a tech stack to take a strategy from start to finish. As stated in the book, the goal was to provide “an easy-to-setup and use environment where readers could instantly experiment with the algorithms to build their confidence without spending any time setting up the required infrastructure.” In other words, the reader has an opportunity to go from the learning, creating and testing phase (with code and AI models) through to potentially running a live trading strategy (through QuantConnect and its connected brokers).

I found the book to be well organized; it is structured into three main parts.

Part 1 is about the Capital Markets and Quantitative Trading.

Part one quickly brings those unfamiliar with the financial markets up to speed. It covers topics ranging from the types of markets traded to the mechanics of the market ecosystem: the various participants, the roles they play, the order types these traders use and who has privileged access to information. The authors go further into derivatives, futures, charting, crypto and more.

The quantitative analysis and trading half of this section gives a comprehensive overview of a quant trader’s functions using QuantConnect and Python code. It details the steps, processes and considerations a quant must work through for a successful process. I think this section will be very beneficial for aspiring and seasoned quant traders alike, as this book does a great job of laying out the market framework and the quantitative trading landscape.

Image from example in Hands-On AI Trading.

Part 2 goes into AI and Machine Learning (ML) in Algorithmic Trading.

Part two focuses on AI-based algorithmic trading. Here, you start to address the market prediction, forecasting or other specific problems you’re trying to solve, proceeding step by step, breaking down issues and finding solutions using AI and machine learning processes. It details dataset preparation, data handling, feature creation and splitting datasets into training and testing sets.
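To give a flavor of that workflow, here’s a minimal sketch of building lagged-return features and a chronological train/test split. This is my own illustration on synthetic prices, not code from the book; note the shuffle=False, since shuffling a time series would leak future information into the training set:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic daily closing prices (a stand-in for real market data)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 500))
returns = np.diff(prices) / prices[:-1]

# Features: the previous 5 daily returns; target: the next day's return
X = np.column_stack([returns[i:len(returns) - 5 + i] for i in range(5)])
y = returns[5:]

# Split chronologically (shuffle=False) to avoid lookahead bias
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)
print(X_train.shape, X_test.shape)
```

Keeping the split chronological is the key design choice here: in backtesting, the test set must always come after the training set in time.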

If you are unfamiliar with AI models, this section (especially Chapter 4) is for you, as it delves into models such as linear regression, Markov models, Bayesian methods, decision trees, support vector machines, neural networks and many more. Alongside these concepts is the Python code you can use for these different types of quant functions.

Part 3 delves into Advanced Applications of AI in Trading and Risk Management.

Finally, part three discusses using these AI models in real trading and investing scenarios. The authors provide 19 specific examples, and this is where I think the main strength of this book lies. These examples illustrate different aspects of the investment game, or problems solved using various AI models, across major financial markets (FX, stocks, etc.). Once understood, they can form the basis for many new ideas, as well as show how these professionals go about their work. The Python code is included for each example.

For instance, one of my favorite examples (#8) was a simple exercise in setting a stop-loss based on historical volatility (and drawdown recovery). This example used a LASSO regression model with features including the VIX, Average True Range (over n months) and standard deviation (over n months). It tested several variations of a dynamic stop-loss order, with varying degrees of success. This type of example represents a common problem most traders run into when working through their strategies.
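To sketch the shape of that approach, here’s a toy version of a volatility-driven stop-loss model. The feature names mirror the ones mentioned above (VIX, ATR, standard deviation), but the data, relationships and numbers are all invented for illustration; the book’s actual example differs in detail:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n = 300
# Hypothetical features (stand-ins for the book's inputs): VIX level,
# Average True Range and standard deviation of returns
vix = rng.uniform(12, 35, n)
atr = rng.uniform(0.5, 3.0, n)
std = rng.uniform(0.005, 0.03, n)
X = np.column_stack([vix, atr, std])

# Synthetic target: the drawdown depth we'd want the stop to tolerate,
# loosely driven by volatility plus noise (made-up coefficients)
y = 0.02 * vix / 20 + 0.01 * atr + 0.5 * std + rng.normal(0, 0.002, n)

model = Lasso(alpha=1e-4).fit(X, y)

# Predicted stop distance (as a fraction of price) for today's conditions
today = np.array([[22.0, 1.4, 0.015]])
stop_fraction = model.predict(today)[0]
entry_price = 100.0
stop_price = entry_price * (1 - stop_fraction)
print(round(stop_price, 2))
```

The appeal of LASSO in this role is that its L1 penalty can shrink unhelpful features to exactly zero, performing a rough form of feature selection while it fits.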

The examples also give interesting ideas on how to use AI and models in use cases beyond just trying to predict future price returns.

Overall Takeaway: 

I thought this book was well done; it is the best book I have read so far at bridging quant trading and AI. The AI and machine learning material is explained in a clear, concise and well-organized way, which matters because it’s very easy to get lost in the weeds with this subject.

The breadth of coverage across these many strategies, concepts and factors is admirable, spanning everything from data acquisition and programming to the role of generative AI. There’s a lot to unpack. There’s a lot to learn. I think it’s a testament to the authors that they created a book that covers so much. There’s also a GitHub repository for the examples.

I would recommend this book to aspiring quant traders or programmers, or anyone interested in understanding these markets, especially where quant trading and AI intersect. I would also recommend it to traders looking for examples of AI in trading or for new ideas for implementing AI strategies.

Disclaimer: Complimentary book copy was provided by Wiley.


Article written by Zac@InvestMacro

 

AI is transforming weather forecasting − and that could be a game changer for farmers around the world

By Paul Winters, University of Notre Dame and Amir Jina, University of Chicago 

For farmers, every planting decision carries risks, and many of those risks are increasing with climate change. One of the most consequential is weather, which can damage crop yields and livelihoods. A delayed monsoon, for example, can force a rice farmer in South Asia to replant or switch crops altogether, losing both time and income.

Access to reliable, timely weather forecasts can help farmers prepare for the weeks ahead, find the best time to plant or determine how much fertilizer will be needed, resulting in better crop yields and lower costs.

Yet, in many low- and middle-income countries, accurate weather forecasts remain out of reach, limited by the high technology costs and infrastructure demands of traditional forecasting models.

A new wave of AI-powered weather forecasting models has the potential to change that.

By using artificial intelligence, these models can deliver accurate, localized predictions at a fraction of the computational cost of conventional physics-based models. This makes it possible for national meteorological agencies in developing countries to provide farmers with the timely, localized information about changing rainfall patterns that the farmers need.

The challenge is getting this technology where it’s needed.

Why AI forecasting matters now

The physics-based weather prediction models used by major meteorological centers around the world are powerful but costly. They simulate atmospheric physics to forecast weather conditions ahead, but they require expensive computing infrastructure. The cost puts them out of reach for most developing countries.

Moreover, these models have mainly been developed by and optimized for northern countries. They tend to focus on temperate, high-income regions and pay less attention to the tropics, where many low- and middle-income countries are located.

A major shift in weather models began in 2022 as industry and university researchers developed deep learning models that could generate accurate short- and medium-range forecasts for locations around the globe up to two weeks ahead.

These models worked at speeds several orders of magnitude faster than physics-based models, and they could run on laptops instead of supercomputers. Newer models, such as Pangu-Weather and GraphCast, have matched or even outperformed leading physics-based systems for some predictions, such as temperature.

A farmer distributes fertilizer in India.
EqualStock IN from Pexels

AI-driven models require dramatically less computing power than the traditional systems.

While physics-based systems may need thousands of CPU hours to run a single forecast cycle, modern AI models can do so using a single GPU in minutes once the model has been trained. This is because the computationally intensive part is the training, in which the model learns relationships in the climate from data; once trained, it can apply those learned relationships to produce a forecast without further extensive computation – a major shortcut. In contrast, physics-based models must calculate the physics for each variable in each place and time for every forecast produced.

While training these models from physics-based model data does require significant upfront investment, once the AI is trained, the model can generate large ensemble forecasts — sets of multiple forecast runs — at a fraction of the computational cost of physics-based models.

Even the expensive step of training an AI weather model shows considerable computational savings. One study found the early model FourCastNet could be trained in about an hour on a supercomputer, making its time to produce a forecast thousands of times faster than that of state-of-the-art physics-based models.

The result of all these advances: high-resolution forecasts globally within seconds on a single laptop or desktop computer.

Research is also rapidly advancing to expand the use of AI for forecasts weeks to months ahead, which would help farmers make planting choices. AI models are already being tested for improving extreme weather prediction, such as for extratropical cyclones and abnormal rainfall.

Tailoring forecasts for real-world decisions

While AI weather models offer impressive technical capabilities, they are not plug-and-play solutions. Their impact depends on how well they are calibrated to local weather, benchmarked against real-world agricultural conditions, and aligned with the actual decisions farmers need to make, such as what and when to plant, or when drought is likely.

To unlock its full potential, AI forecasting must be connected to the people whose decisions it’s meant to guide.

That’s why groups such as AIM for Scale, a collaboration we work with as researchers in public policy and sustainability, are helping governments to develop AI tools that meet real-world needs, including training users and tailoring forecasts to farmers’ needs. International development institutions and the World Meteorological Organization are also working to expand access to AI forecasting models in low- and middle-income countries.

AI forecasts can be tailored to context-specific agricultural needs, such as identifying optimal planting windows, predicting dry spells or planning pest management. Disseminating those forecasts through text messages, radio, extension agents or mobile apps can then help reach farmers who can benefit. This is especially true when the messages themselves are constantly tested and improved to ensure they meet the farmers’ needs.

A recent study in India found that when farmers there received more accurate monsoon forecasts, they made more informed decisions about what and how much to plant – or whether to plant at all – resulting in better investment outcomes and reduced risk.

A new era in climate adaptation

AI weather forecasting has reached a pivotal moment. Tools that were experimental just five years ago are now being integrated into government weather forecasting systems. But technology alone won’t change lives.

With support, low- and middle-income countries can build the capacity to generate, evaluate and act on their own forecasts, providing valuable information to farmers that has long been missing in weather services.

About the Author:

Paul Winters, Professor of Sustainable Development, University of Notre Dame and Amir Jina, Assistant Professor of Public Policy, University of Chicago

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI is advancing even faster than sci-fi visionaries like Neal Stephenson imagined

By Rizwan Virk, Arizona State University 

Every time I read about another advance in AI technology, I feel like another figment of science fiction moves closer to reality.

Lately, I’ve been noticing eerie parallels to Neal Stephenson’s 1995 novel “The Diamond Age: Or, A Young Lady’s Illustrated Primer.”

“The Diamond Age” depicted a post-cyberpunk sectarian future, in which society is fragmented into tribes, called phyles. In this future world, sophisticated nanotechnology is ubiquitous, and a new type of AI is introduced.

Though inspired by MIT nanotech pioneer Eric Drexler and Nobel Prize winner Richard Feynman, the advanced nanotechnology depicted in the novel still remains out of reach. However, the AI that’s portrayed, particularly a teaching device called the Young Lady’s Illustrated Primer, isn’t only right in front of us; it also raises serious issues about the role of AI in labor, learning and human behavior.

In Stephenson’s novel, the Primer looks like a hardcover book, but each of its “pages” is really a screen display that can show animations and text, and it responds to its user in real time via AI. The book also has an audio component, which voices the characters and narrates stories being told by the device.

It was originally created for the young daughter of an aristocrat, but it accidentally falls into the hands of a girl named Nell who’s living on the streets of a futuristic Shanghai. The Primer provides Nell personalized emotional, social and intellectual support during her journey to adulthood, serving alternatively as an AI companion, a storyteller, a teacher and a surrogate parent.

The AI is able to weave fairy tales that help a younger Nell cope with past traumas, such as her abusive home and life on the streets. It educates her on everything from math to cryptography to martial arts. In a techno-futuristic homage to George Bernard Shaw’s 1913 play “Pygmalion,” the Primer goes so far as to teach Nell the proper social etiquette to be able to blend into neo-Victorian society, one of the prominent tribes in Stephenson’s balkanized world.

No need for ‘ractors’

Three recent developments in AI – in video games, wearable technology and education – reveal that building something like the Primer should no longer be considered the purview of science fiction.

In May 2025, the hit video game “Fortnite” introduced an AI version of Darth Vader, who speaks with the voice of the late James Earl Jones.

While it was popular among fans of the game, the Screen Actors Guild lodged a labor complaint with Epic Games, the creator of “Fortnite.” Even though Epic had received permission from the late actor’s estate, the Screen Actors Guild pointed out that actors could have been hired to voice the character, and the company – in refusing to alert the union and negotiate terms – violated existing labor agreements.

In “The Diamond Age,” while the Primer uses AI to generate the fairy tales that train Nell, for the voices of these archetypal characters, Stephenson concocted a low-tech solution: The characters are played by a network of what he termed “ractors” – real actors working in a studio who are contracted to perform and interact in real time with users.

The Darth Vader “Fortnite” character shows that a Primer built today wouldn’t need to use actors at all. It could rely almost entirely on AI voice generation and have real-time conversations, showing that today’s technology already exceeds Stephenson’s normally far-sighted vision.

Recording and guiding in real time

Synthesizing James Earl Jones’ voice in “Fortnite” wasn’t the only recent AI development heralding the arrival of Primer-like technology.

I recently witnessed a demonstration of wearable AI that records all of the wearer’s conversations. Their words are then sent to a server so they can be analyzed by AI, providing both summaries and suggestions to the user about future behavior.

Several startups are making these “always on” AI wearables. In an April 29, 2025, essay titled “I Recorded Everything I Said for Three Months. AI Has Replaced My Memory,” Wall Street Journal technology columnist Joanna Stern describes the experience of using this technology. She concedes that the assistants created useful summaries of her conversations and meetings, along with helpful to-do lists. However, they also recalled “every dumb, private and cringeworthy thing that came out of my mouth.”

AI wearable devices that continuously record the conversations of their users have recently hit the market.

These devices also create privacy issues. The people whom the user interacts with don’t always know they are being recorded, even as their words are also sent to a server for the AI to process them. To Stern, the technology’s potential for mass surveillance becomes readily apparent, presenting a “slightly terrifying glimpse of the future.”

Relying on AI engines such as ChatGPT, Claude and Google’s Gemini, the wearables work only with words, not images. Behavioral suggestions occur only after the fact. However, a key function of the Primer – coaching users in real time in the middle of any situation or social interaction – is the next logical step as the technology advances.

Education or social engineering?

In “The Diamond Age,” the Primer doesn’t simply weave interactive fairy tales for Nell. It also assumes the responsibility of educating her on everything from her ABCs when younger to the intricacies of cryptography and politics as she gets older.

It’s no secret that AI tools, such as ChatGPT, are now being widely used by both teachers and students.

Several recent studies have shown that AI may be more effective than humans at teaching computer science. One survey found that 85% of students said ChatGPT was more effective than a human tutor. And at least one college, Morehouse College in Atlanta, is introducing an AI teaching assistant for professors.

There are certainly advantages to AI tutors: Tutoring and college tuition can be exorbitantly expensive, and the technology can offer better access to education to people of all income levels.

Pulling together these latest AI advances – interactive avatars, behavioral guides, tutors – it’s easy to envision how an AI device like the Young Lady’s Illustrated Primer could be created in the near future. A young person might have a personalized AI character that accompanies them at all times. It can teach them about the world and offer up suggestions for how to act in certain situations. The AI could be tailored to a child’s personality, concocting stories that include AI versions of their favorite TV and movie characters.

But “The Diamond Age” offers a warning, too.

Toward the end of the novel, a version of the Primer is handed out to hundreds of thousands of young Chinese girls who, like Nell, didn’t have access to education or mentors. This leads to the education of the masses. But it also opens the door to large-scale social engineering, creating an army of Primer-raised martial arts experts, whom the AI then directs to act on behalf of “Princess Nell,” Nell’s fairy tale name.

It’s easy to see how this sort of large-scale social engineering could be used to target certain ideologies, crush dissent or build loyalty to a particular regime. The AI’s behavior could also be subject to the whims of the companies or individuals that created it. A ubiquitous, always-on, friendly AI could become the ultimate monitoring and reporting device. Think of a kinder, gentler face for Big Brother that people have trusted since childhood.

While large-scale deployment of a Primer-like AI could certainly make young people smarter and more efficient, it could also hamper one of the most important parts of education: teaching people to think for themselves.

About the Author:

Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The hidden cost of convenience: How your data pulls in hundreds of billions of dollars for app and social media companies

By Kassem Fawaz, University of Wisconsin-Madison and Jack West, University of Wisconsin-Madison 

You wake up in the morning and, first thing, you open your weather app. You close that pesky ad that opens first and check the forecast. You like your weather app, which shows hourly weather forecasts for your location. And the app is free!

But do you know why it’s free? Look at the app’s privacy settings. You help keep it free by allowing it to collect your information, including:

  • What devices you use and their IP and Media Access Control (MAC) addresses.
  • Information you provide when signing up, such as your name, email address and home address.
  • App settings, such as whether you choose Celsius or Fahrenheit.
  • Your interactions with the app, including what content you view and what ads you click.
  • Inferences based on your interactions with the app.
  • Your location at a given time, including, depending on your settings, continuous tracking.
  • What websites or apps you interact with after you use the weather app.
  • Information you give to ad vendors.
  • Information gleaned by analytics vendors that analyze and optimize the app.

This type of data collection is standard fare. The app company can use this to customize ads and content. The more customized and personalized an ad is, the more money it generates for the app owner. The owner might also sell your data to other companies.

Many apps, including The Weather Channel app, send you targeted advertising and sell your personal data by default.
Jack West, CC BY-ND

You might also check a social media account like Instagram. The subtle price that you pay is, again, your data. Many “free” mobile apps gather information about you as you interact with them.

As an associate professor of electrical and computer engineering and a doctoral student in computer science, we follow the ways software collects information about people. Your data allows companies to learn about your habits and exploit them.

It’s no secret that social media and mobile applications collect information about you. Meta’s business model depends on it. The company, which operates Facebook, Instagram and WhatsApp, is worth US$1.48 trillion. Just under 98% of its profits come from advertising, which leverages data from the billions of monthly users across its apps.

What your data is worth

Before mobile phones gained apps and social media became ubiquitous, companies conducted large-scale demographic surveys to assess how well a product performed and to get information about the best places to sell it. They used the information to create coarsely targeted ads that they placed on billboards, print ads and TV spots.

Mobile apps and social media platforms now let companies gather much more fine-grained information about people at a lower cost. Through apps and social media, people willingly trade personal information for convenience. In 2007 – a year after the introduction of targeted ads – Facebook made over $153 million, triple the previous year’s revenue. In the past 17 years, that number has increased by more than 1,000 times.

Five ways to leave your data

App and social media companies collect your data in many ways. Meta is a representative case. The company’s privacy policy highlights five ways it gathers your data:

First, it collects the profile information you fill in. Second, it collects the actions you take on its social media platforms. Third, it collects the people you follow and friend. Fourth, it keeps track of each phone, tablet and computer you use to access its platforms. And fifth, it collects information about how you interact with apps that corporate partners connect to its platforms. Many apps and social media platforms follow similar privacy practices.

Your data and activity

When you create an account on an app or social media platform, you provide the company that owns it with information like your age, birth date, identified sex, location and workplace. In the early years of Facebook, selling profile information to advertisers was that company’s main source of revenue. This information is valuable because it allows advertisers to target specific demographics like age, identified gender and location.

And once you start using an app or social media platform, the company behind it can collect data about how you use the app or social media. Social media keeps you engaged as you interact with other people’s posts by liking, commenting or sharing them. Meanwhile, the social media company gains information about what content you view and how you communicate with other people.

Advertisers can find out how much time you spent reading a Facebook post or that you spent a few more seconds on a particular TikTok video. This activity information tells advertisers about your interests. Modern algorithms can quickly pick up subtleties and automatically change the content to engage you in a sponsored post, a targeted advertisement or general content.

Your devices and applications

Companies can also note what devices, including mobile phones, tablets and computers, you use to access their apps and social media platforms. This shows advertisers your brand loyalty, how old your devices are and how much they’re worth.

Because mobile devices travel with you, they have access to information about where you’re going, what you’re doing and who you’re near. In a lawsuit against Kochava Inc., the Federal Trade Commission called out the company for selling customer geolocation data in August 2022, shortly after Roe v. Wade was overruled. The people being tracked, including people who had abortions after the ruling, often didn’t know that data tracking their movements was being collected, according to the commission. The FTC alleged that the data could be used to identify households.

Kochava has denied the FTC’s allegations.

Information that apps can gain from your mobile devices includes anything you have given an app permission to have, such as your location, who you have in your contact list or photos in your gallery.

If you give an app permission to see where you are while the app is running, for instance, the platform can access your location anytime the app is running. Providing access to contacts may provide an app with the phone numbers, names and emails of all the people that you know.

Cross-application data collection

Companies can also gain information about what you do across different apps by acquiring information collected by other apps and platforms.

The settings on an Android phone show that Meta uses information it collects about you to target ads it shows you in its apps – and also in other apps and on other platforms – by default.
Jack West, CC BY-ND

This is common with social media companies. This allows companies to, for example, show you ads based on what you like or recently looked at on other apps. If you’ve searched for something on Amazon and then noticed an ad for it on Instagram, it’s probably because Amazon shared that information with Instagram.

This combined data collection has made targeted advertising so accurate that people have reported that they feel like their devices are listening to them.

Companies, including Google, Meta, X, TikTok and Snapchat, can build detailed user profiles based on collected information from all the apps and social media platforms you use. They use the profiles to show you ads and posts that match your interests to keep you engaged. They also sell the profile information to advertisers.

Meanwhile, researchers have found that Meta and Yandex, a Russian search engine, have overcome controls in mobile operating system software that ordinarily keep people’s web-browsing data anonymous. Each company put code on its webpages that used local IP addresses to pass a person’s browsing history, which is supposed to remain private, to mobile apps installed on that person’s phone, de-anonymizing the data. Yandex had been conducting this tracking since 2017, while Meta began in September 2024, according to the researchers.

What you can do about it

If you use apps that collect your data in some way, including those that give you directions, track your workouts or help you contact someone, or if you use social media platforms, your privacy is at risk.

Aside from entirely abandoning modern technology, there are several steps you can take to limit access – at least in part – to your private information.

Read the privacy policy of each app or social media platform you use. Although privacy policy documents can be long, tedious and sometimes hard to read, they explain how social media platforms collect, process, store and share your data.

Check a policy by making sure it can answer three questions: what data the app collects, how it collects that data, and what the data is used for. If you can’t answer all three by reading the policy, or if any of the answers don’t sit well with you, consider skipping the app until its data practices change.

Remove unnecessary permissions from mobile apps to limit the amount of information that applications can gather from you.

Be aware of the privacy settings that might be offered by the apps or social media platforms you use, including any setting that allows your personal data to affect your experience or shares information about you with other users or applications.

These privacy settings can give you some control. We recommend that you disable “off-app activity” and “personalization” settings. “Off-app activity” allows an app to record which other apps are installed on your phone and what you do on them. Personalization settings allow an app to use your data to tailor what it shows you, including advertisements.

Review and update these settings regularly because permissions sometimes change when apps or your phone update. App updates may also add new features that can collect your data. Phone updates may also give apps new ways to collect your data or add new ways to preserve your privacy.

Use private browser windows or reputable virtual private network (VPN) software when using apps that connect to the internet and social media platforms. Private browsers don’t store any account information, which limits the information that can be collected. VPNs change the IP address of your machine so that apps and platforms can’t discover your location.

Finally, ask yourself whether you really need every app that’s on your phone. And when using social media, consider how much information you want to reveal about yourself in liking and commenting on posts, sharing updates about your life, revealing locations you visited and following celebrities you like.


This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it. The Conversation

Kassem Fawaz, Associate Professor of Electrical and Computer Engineering, University of Wisconsin-Madison and Jack West, PhD Student in Computer Science, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Valutico Acquires AI Innovator Paraloq Analytics to Revolutionize Private Company Analysis

VIENNA, Austria – JUNE 19, 2025 – Valutico, a global leader in valuation and financial analysis software, today announced its strategic acquisition of Paraloq Analytics, a Vienna-based artificial intelligence (AI) specialist. This acquisition will integrate Paraloq Analytics’ advanced AI capabilities into Valutico’s renowned platform, empowering financial professionals with unprecedented data-driven insights and efficiency.

The two Vienna-headquartered companies have previously cooperated on the development of Done Diligence, an innovative tool that uses advanced AI agents to empower humans to perform due diligence work more efficiently. Now the companies are joining forces to create a powerhouse to further drive digital transformation in the Financial Services and Banking industries. By embedding AI-driven analytics and enhanced data interpretation into its platform, Valutico will offer its global client base even more robust, accurate, and forward-looking valuation solutions.

“We are thrilled to welcome Paraloq Analytics to the Valutico family,” said Paul Resch, CEO of Valutico. “Paraloq’s deep and long-standing experience with AI, particularly in the Banking sector, perfectly complements our mission to provide the most sophisticated and user-friendly financial analysis platform on the market. This acquisition will significantly accelerate our product roadmap, bringing next-generation intelligence to our customers and further solidifying our leadership position in the space.”

Paraloq Analytics, founded in 2019 by two econometrics PhD candidates at the University of St. Gallen, has quickly established itself as an innovator in applying AI to complex challenges in Banking and related fields. Their expertise in areas such as econometrics, machine learning, and AI software development will be instrumental in enhancing Valutico’s data analytics capabilities and helping its users analyse qualitative information.

“Joining forces with Valutico is an exciting new chapter for Paraloq Analytics,” said Paraloq Co-Founder Maximilian Arrich. “Valutico’s global reach and established platform provide the perfect launchpad for our AI technologies. Over the past year of working together, we built a common vision for the future of financial analysis – one that is more data-driven, intelligent, and efficient. We are eager to contribute our expertise to create truly transformative tools for Finance professionals.”

Strategic Benefits of the Acquisition:

  • Enhanced AI-Powered Insights: Integration of Paraloq’s technology will complement Valutico’s analysis of structured data (e.g. financial information) with diverse sources of unstructured data (e.g. contents of a virtual data room, news, social media, etc.).

  • Market Access: Valutico’s global reach will accelerate the rollout of Paraloq’s technology to new client verticals and geographies.

  • Talent Acquisition: The Paraloq team will complement the Valutico family and further strengthen its AI capabilities.

  • Innovation Acceleration: The combined expertise will fast-track the development of new, cutting-edge features for Valutico users.

Valutico will begin integrating Paraloq Analytics’ technology and team immediately, with Paraloq founder Maximilian Arrich joining Valutico’s management team as VP of AI Research. Clients can expect to see an acceleration of AI-enhanced feature rollouts in upcoming platform updates.

Terms of the acquisition were not disclosed.

About Valutico:

Valutico is a leading global provider of business valuation software. Founded in 2017, Valutico empowers financial professionals and valuation experts in over 90 countries to perform high-quality and efficient valuations with its comprehensive data, automated financial models, and intuitive platform. Valutico is headquartered in Vienna, Austria, with offices in the UK, US, Germany, the Netherlands and Singapore.


About Paraloq Analytics:

Paraloq Analytics is a Vienna-based company founded in 2019, specializing in artificial intelligence, machine learning, and econometric solutions for the Banking industry. Paraloq helps businesses unlock the power of their data by developing and implementing bespoke AI-driven software and providing expert data science and AI consulting.

AI helps tell snow leopards apart, improving population counts for these majestic mountain predators

By Eve Bohnett, University of Florida 

Snow leopards are known as the “ghosts of the mountains” for a reason. Imagine waiting for months in the harsh, rugged mountains of Asia, hoping to catch even a glimpse of one. These elusive big cats move silently across rocky slopes, their pale coats blending so seamlessly with snow and stone that even the most seasoned biologists seldom spot them in the wild.

Travel writer Peter Matthiessen spent two months in 1973 searching the Tibetan plateau for them and wrote a 300-page book about the effort. He never saw one. Forty years later, Peter’s son Alex retraced his father’s steps – and didn’t see one either.

Researchers have struggled to come up with a figure for the global population. In 2017, the International Union for Conservation of Nature reclassified the snow leopard from endangered to vulnerable, citing estimates of between 2,500 and 10,000 adults in the wild. However, the group also warned that numbers continue to decline in many areas due to habitat loss, poaching and human-wildlife conflict. Those who study these animals want to help protect the species and their habitat – if only we can determine exactly where they live and how many there are.

Traditional tracking methods – searching for footprints, droppings and other signs – have their limits. Instead of waiting for a lucky face-to-face encounter, conservationists from the Wildlife Conservation Society, led by experts including Stéphane Ostrowski and Sorosh Poya Faryabi, began deploying automated camera traps in Afghanistan. These devices snap photos whenever movement is detected, capturing thousands of images over months, all in hopes of obtaining a rare glimpse of a snow leopard.

But capturing images is only half the battle. The next, even harder task is telling one snow leopard apart from another.

Two images of snow leopards.
Are these the same animal or different ones? It’s really hard to tell.
Eve Bohnett, CC BY-ND

At first glance, it might sound simple: Each snow leopard has a unique pattern of black rosettes on its coat, like a fingerprint or a face in a crowd. Yet in practice, identifying individuals by these patterns is slow, subjective and prone to error. Photos may be taken at odd angles, under poor lighting, or with parts of the animal obscured – making matches tricky.

A common mistake happens when photos from different cameras are marked as depicting different animals when they actually show the same individual, inflating population estimates. Worse, camera trap images can get mixed up or misfiled, splitting encounters of one cat across multiple batches and identities.

I am a data analyst working with the Wildlife Conservation Society and other partners at Wild Me. My work, and that of others, has found that even trained experts can misidentify animals, failing to recognize repeat visitors at locations monitored by motion-sensing cameras and counting the same animal more than once. One study found that the snow leopard population was overestimated by more than 30% because of these human errors.

To avoid these pitfalls, researchers follow camera sorting guidelines: At least three clear pattern differences or similarities must be confirmed between two images to declare them the same or different cats. Images too blurry, too dark or taken from difficult angles may have to be discarded. Identification efforts range from easy cases with clear, full-body shots to ambiguous ones needing collaboration and debate. Despite these efforts, variability remains, and more experienced observers tend to be more accurate.

Now people trying to count snow leopards are getting help from artificial intelligence systems, in two ways.

Spotting the spots

Modern AI tools are revolutionizing how we process these large photo libraries. First, AI can rapidly sort through thousands of images, flagging those that contain snow leopards and ignoring irrelevant ones such as those that depict blue sheep, gray-and-white mountain terrain, or shadows.

A snow leopard stands amid rocks.
Unique spots and spot patterns are key to telling snow leopards apart.
Eve Bohnett, CC BY-NC-ND
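The triage step described above can be sketched in code. This is a toy illustration, not the actual pipeline the researchers use: it assumes a detector has already assigned each image a label and a confidence score (the filenames, labels and thresholds here are all made up), and simply keeps the frames confidently labeled as snow leopards.

```python
# Toy sketch of camera-trap image triage: keep only frames whose
# (hypothetical) detector output labels a snow leopard with high
# confidence, discarding blue sheep, terrain, shadows and so on.

def filter_snow_leopard_frames(detections, threshold=0.8):
    """detections: list of (filename, label, confidence) tuples."""
    return [
        fname
        for fname, label, conf in detections
        if label == "snow_leopard" and conf >= threshold
    ]

# Illustrative detector output for four frames.
detections = [
    ("cam1_001.jpg", "snow_leopard", 0.93),
    ("cam1_002.jpg", "blue_sheep", 0.88),
    ("cam1_003.jpg", "shadow", 0.40),
    ("cam1_004.jpg", "snow_leopard", 0.65),  # too uncertain at default threshold
]

keep = filter_snow_leopard_frames(detections)
print(keep)
```

In practice the threshold trades off missed cats against wasted review time; lowering it passes more borderline frames to a human.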

AI can identify individual snow leopards by analyzing their unique rosette patterns, even when poses or lighting vary. Each snow leopard encounter is compared with a catalog of previously identified photos and assigned a known ID if there is a match, or entered as a new individual if not.

In a recent study, several colleagues and I evaluated two AI algorithms, both separately and in tandem.

The first algorithm, called HotSpotter, identifies individual snow leopards by comparing key visual features such as coat patterns, highlighting distinctive “hot spots” with a yellow marker.

The second is a newer method called pose invariant embeddings, which operates similarly to facial recognition technology: It recognizes layers of abstract features in the data, identifying the same animal regardless of its pose or the lighting conditions in the photo.
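The embedding idea can be illustrated with a minimal sketch: each photo is reduced to a feature vector, and a new encounter is matched to the catalog entry with the highest cosine similarity, or flagged as a new individual if nothing is similar enough. The vectors, IDs and threshold below are invented for illustration; the real systems learn much higher-dimensional embeddings.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, lower for dissimilar.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, catalog, threshold=0.9):
    """Match a query embedding against a catalog of known individuals."""
    best_id, best_sim = None, -1.0
    for cat_id, vec in catalog.items():
        sim = cosine(query, vec)
        if sim > best_sim:
            best_id, best_sim = cat_id, sim
    return best_id if best_sim >= threshold else "new individual"

# Hypothetical catalog of two known cats, as tiny toy embeddings.
catalog = {
    "F3": np.array([0.9, 0.1, 0.3]),
    "M7": np.array([0.2, 0.8, 0.5]),
}

query = np.array([0.88, 0.12, 0.31])  # a new photo, close to F3's pattern
print(identify(query, catalog))
```

Because cosine similarity compares direction rather than raw pixel values, the same animal photographed at different angles or brightness can still land near its catalog entry.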

We trained these systems using a curated dataset of photos of snow leopards from zoos in the U.S., Europe and Tajikistan, and with images from the wild, including in Afghanistan.

Alone, each model worked about 74% of the time, correctly identifying the cat from a large photo library. But when combined, the two systems together were correct 85% of the time.
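One intuition for why two imperfect matchers can outperform either alone is score fusion: when one algorithm is undecided between two candidates, the other can break the tie. The sketch below averages each candidate's match scores from two hypothetical algorithms; the numbers are illustrative only, not taken from the study.

```python
# Hedged sketch of score fusion between two matchers. Each dict maps a
# catalog ID to that algorithm's match score in [0, 1] for one query photo.

def fuse_scores(scores_a, scores_b, w=0.5):
    """Weighted average of two matchers' scores, per candidate ID."""
    return {
        cat_id: w * scores_a[cat_id] + (1 - w) * scores_b[cat_id]
        for cat_id in scores_a
    }

pattern_scores = {"F3": 0.70, "M7": 0.70, "F9": 0.20}  # pattern matcher ties F3/M7
embedding_scores = {"F3": 0.85, "M7": 0.40, "F9": 0.30}  # embeddings break the tie

fused = fuse_scores(pattern_scores, embedding_scores)
best = max(fused, key=fused.get)
print(best, fused[best])
```

Equal weights are the simplest choice; in a real system the weights would be tuned on validation data.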

These algorithms were integrated into Wildbook, an open-source, web-based software platform developed by the nonprofit organization Wild Me and now adopted by ConservationX. We deployed the combined system on a free website, Whiskerbook.org, where researchers can upload images, seek matches using the algorithms, and confirm those matches with side-by-side comparisons. This site is among a growing family of AI-powered wildlife platforms that are helping conservation biologists work more efficiently and protect species and their habitats more effectively.

Two images of snow leopards, one in daylight and one in infrared light.
A view from an online wildlife-tracking system suggests a possible match for a snow leopard caught by a remote camera.
Wildbook/Eve Bohnett, CC BY-ND

Humans still needed

These AI systems aren’t error-proof. AI quickly narrows down candidates and flags likely matches, but expert validation ensures accuracy, especially with tricky or ambiguous photos.

Another study we conducted pitted AI-assisted groups of experts and novices against each other. Each was given a set of three to 10 images of 34 known captive snow leopards and asked to use the Whiskerbook platform to identify them. They were also asked to estimate how many individual animals were in the set of photos.

The experts accurately matched about 90% of the images and delivered population estimates within about 3% of the true number. In contrast, the novices identified only 73% of the cats and underestimated the total number, sometimes by 25% or more, incorrectly merging two individuals into one.

Both sets of results were better than when experts or novices did not use any software.

The takeaway is clear: Human expertise remains important, and combining it with AI support leads to the most accurate results. My colleagues and I hope that by using tools like Whiskerbook and the AI systems embedded in them, researchers will be able to more quickly and more confidently study these elusive animals.

With AI tools like Whiskerbook illuminating the mysteries of these mountain ghosts, we have another way to safeguard snow leopards – but success depends on continued commitment to protecting their fragile mountain homes. The Conversation

About the Author:

Eve Bohnett, Assistant Scholar, Center for Landscape Conservation Planning, University of Florida

This article is republished from The Conversation under a Creative Commons license. Read the original article.