ARTIFICIAL INTELLIGENCE. AI letters and robot hand are placed on computer motherboard in this illustration created on June 23, 2023.

Generative AI in 2025: From Ghibli memes to low-cost therapy and beyond

2025/12/09 16:00

In 2025, artificial intelligence reached a point of relative normalcy in everyday use. Thanks to its ease of use, generative AI (GenAI), that is, AI that applies machine learning techniques to the data it has access to in order to produce new content of various kinds, is at the forefront of what people are using.

GenAI is now being used in a staggering number of fields and contexts, and for a variety of reasons. Let’s try to pin these down.

For amusement and meme-making

One of GenAI’s common uses in 2025 was as a source of amusement or entertainment. 

People took advantage of the power of GenAI to make memes and other images that mimicked various art styles, such as the style of Studio Ghibli. 

Tools meant for making GenAI-enabled videos also came out in droves. These included Sora, Veo, and Firefly, among a host of others.

These memes were a source of fun, but they were also a way of helping people take stock of the world with humor.

Must Read

[DECODED] How absurd memes help Filipinos cope with crisis and chaos

That said, there was also significant pushback from artists of all stripes, who called for protections to keep their works from becoming AI training data, or for licensing agreements that would properly compensate them for their work.

GenAI as musician

GenAI was also used, perhaps in part or in whole, to make AI “music” that was “virtually undetectable” or at the very least very difficult to distinguish from human-made music.

Human artists have been pushing back against this. Music insiders are calling for guidelines to identify AI-made music on streaming services, and AI artists have already made their way into the Original Pilipino Music (OPM) space.

Musicians who put their work out there for people to enjoy are not having it. As “Sandali” hitmaker MRLD said in a GMA report, “Ubos braincells, lahat ng pagpupuyat, at wala [nang] page sa notebook na sinusulatan ng kanta para lang matalo ng isang robot na iniiba lang ang genre ng mismong kanta.”

(All those brain cells used up, all the late nights, and not a page left in the notebook for writing songs, only to lose to a robot that just changes the genre of the very same song.)

Whether these AI tracks ended up on streaming services as a money-making endeavor or just for fun, it stands to reason that they cribbed off existing musicians’ styles and likely skirted or ignored copyright in some form.

According to a Deezer-Ipsos study, 73% of respondents supported disclosure when AI-generated tracks are recommended, 45% sought filtering options, and 40% said they would skip AI-generated songs entirely. Meanwhile, around 71% expressed surprise at their inability to distinguish between human-made and synthetic tracks.

The political sphere

Generative AI also made it into the political sphere. Usually combined with coordinated inauthentic behavior, generative AI outputs were used to simulate seemingly thoughtful responses to Philippine political issues.

Must Read

PH’s 2025 in tech: Top 6 in ChatGPT use, highest scam rate, internet initiatives

A June report by OpenAI, for one, mentioned how it had banned ChatGPT accounts using its models to generate bulk volumes of short comments in English and Taglish that were meant to be posted on politics and current events topics on TikTok and Facebook. Rappler’s report mentioned that the comments “were focused on praising Marcos and/or criticizing his erstwhile ally, Vice President Sara Duterte.”

OpenAI added that “this activity was connected to Comm&Sense Inc, a commercial marketing company in the Philippines.”

On the more insidious side of things, people also used GenAI to make images of Rodrigo Duterte statues pop up everywhere online, or to make AI videos opposing Vice President Sara Duterte’s impeachment. Apparent attempts to appeal to the Duterte “cult of personality,” these AI outputs were political disinformation disguised as entertainment.

Must Read

[DECODED] What makes AI-generated Duterte statues so popular online?

Further, GenAI images of politicians were shared alongside false claims about the former UniTeam of President Ferdinand Marcos Jr. and supporters of former vice president Leni Robredo, called the Kakampinks, banding together to form “UniPink.”

As a therapist and friend

The Philippines’ growing use of GenAI tools like ChatGPT also underscored how, beyond specific practical ends, people used them to find affordable or free mental health assistance, or as a surrogate social connection they could count on.

The World Health Organization Commission on Social Connection’s global report revealed that one in six people worldwide is affected by loneliness, with significant impacts on health and well-being.

In GenAI’s case, chatbots served as non-judgmental connections people could turn to for solace, or as a means of finding further help for how they’re feeling or thinking.

Must Read

Where AI falls short: The limits of artificial emotional support

Trigger warning: AI-enabled therapy is not without its risks. AI tends not to push back against a person’s ideas, which has led some to warn against AI as a therapy tool after cases in which it may have driven teens not to seek help from their parents and to see suicide as a solution.

(The Department of Health has national crisis hotlines to assist people with mental health concerns: 1553 (landline), 0966-351-4518, and 0917-899-USAP (8727) (Globe/TM); and 0908-639-2672 (Smart/Sun/TNT).)

Bridging or creating AI gaps

As artificial intelligence iterates and improves, it becomes more important to recognize the ever-present digital divide, which increased AI adoption might make worse, even as we enter an age of ever-increasing “AI slop,” as dictionaries might call it.

Not only will people have to be technologically proficient, they will also have to be mindful of how AI now shapes reality.

Must Read

AI could increase divide between rich and poor states, UN report warns

Otherwise, the further along in the AI cycle we go, the greater the risk of widening inequality between rich and poor states.

What started out in the 1950s as a marketing term to secure funding for a field of research, and has since become a source of entertainment, is also a cause for concern as it upends work and play.

And yet, as we deal with the repercussions of generative and other forms of AI as they emerge, greater AI literacy is needed to bridge these gaps.

Governments should look to enact guardrails to protect against AI use by bad actors, and civil society should also work in tandem to inform the public of the potentials and problems posed by GenAI and all its other emerging forms. – Rappler.com

