
From 50 Pages of Handwritten Notes to a Digital Manuscript with Python and AI

We’ve all got them. The notebooks filled with scribbled ideas, the half-finished projects, the “someday” repositories gathering digital dust. For three years, my “someday” project was a 50-page, handwritten draft of a novel. It was a tangible thing, a stack of paper in a box, but the activation energy required to turn it into a working digital manuscript always seemed just out of reach.

Then, life threw a serious curveball: a health scare that came with a flurry of heavy, clinical words. I won't dwell on the details, but it became a powerful, personal forcing function. The concept of "someday" was suddenly replaced with the urgency of "right now." The project was no longer a hobby; it was a mission.

It was time to digitize. My plan was simple: take photos of each page with my iPhone and feed them into a modern AI with vision capabilities to transcribe the text. What could be easier?

The First Roadblock: Apple’s HEIC Problem

As any developer knows, the gap between a simple plan and a working execution is where the real work happens. I quickly took high-resolution photos of all 50 pages, but when I tried to upload them, I hit an immediate wall.

The native iOS camera format, HEIC (High-Efficiency Image Container), is great for saving space. It’s not so great for compatibility. Many APIs and libraries, including some of the most powerful vision models, are optimized for older, more universal formats like JPEG.

My seamless AI pipeline was blocked at the first step. Manually converting 50+ images was a non-starter. This wasn't a time for tedious tasks; this was a time for building. So, I did what any developer does when faced with a repetitive, boring problem: I wrote a script to fix it.

The Python Script That Unlocked Everything

The beauty of Python is its vast ecosystem of libraries that can solve almost any problem. In this case, Pillow (the friendly fork of PIL) and the pillow-heif library were the perfect tools for the job.

The goal was simple: point a script at a folder of .heic files and have it spit out high-quality JPEGs in another folder. This little script was the key that unlocked the entire project.

# A simple, effective script to batch convert HEIC files to JPEG
from PIL import Image
import pillow_heif
import os

# --- Configuration ---
# The folder where my iPhone photos were stored
image_folder_path = '/home/j/Desktop/book_notes'
# The destination for the converted files
converted_folder_path = '/home/j/Desktop/book_notes/converted'
# --- End Configuration ---

# Create the destination folder if it doesn't exist
os.makedirs(converted_folder_path, exist_ok=True)

print('start the process yo')

try:
    # A clean one-liner to find all .heic files, case-insensitively
    get_the_files = [f for f in os.listdir(image_folder_path) if f.lower().endswith('.heic')]
    print(f"Found {len(get_the_files)} this many yo")

    for filename in get_the_files:
        print(f"Processing: {filename}")

        # Construct the full path to the source file
        _path = os.path.join(image_folder_path, filename)

        # Create the new JPEG filename
        jpeg_filename = os.path.splitext(filename)[0] + '.jpg'
        jpeg_path = os.path.join(converted_folder_path, jpeg_filename)

        print(f"Converting {filename} -> {jpeg_filename}...")

        # Read the HEIF file
        heif_file = pillow_heif.read_heif(_path)

        # Create a Pillow Image from the raw pixel data
        image = Image.frombytes(
            heif_file.mode,
            heif_file.size,
            heif_file.data,
            'raw',
        )

        # Save the image as a JPEG with high quality
        image.save(jpeg_path, "JPEG", quality=95)

except Exception as e:
    print(f"An error occurred: {e}")

print('you be done yo!')
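A quick note on running it: the script depends on two third-party packages from PyPI, Pillow and pillow-heif; everything else is standard library. A minimal invocation, assuming the file is saved as convert_heic.py (the filename is mine, the post never names it):

    # Install the two dependencies, then run the batch conversion
    pip install pillow pillow-heif
    python convert_heic.py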

This script worked flawlessly. In a matter of seconds, my incompatible photo library became a clean, ordered set of JPEGs, ready for the AI.

The Real Surprise: AI as a Story Editor

With the conversion done, I batch-uploaded the JPEGs to a vision-enabled LLM. This is where the true magic of modern AI became apparent.
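The post doesn't say which model or API handled the upload, but for readers who want to reproduce the step, here is a minimal sketch assuming an OpenAI-style vision endpoint. The model name, prompt, and folder path are illustrative assumptions, not details from the original workflow.

    # Hypothetical sketch: the post doesn't name the model or API used.
    # Assumes the OpenAI Python SDK's vision-capable chat endpoint.
    import base64
    import os
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def encode_image(path):
        """Base64-encode a JPEG so it can be sent inline as a data URL."""
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode("utf-8")

    converted_folder_path = '/home/j/Desktop/book_notes/converted'
    jpegs = sorted(f for f in os.listdir(converted_folder_path) if f.endswith('.jpg'))

    # Build one user message containing the instruction plus every page image
    content = [{"type": "text",
                "text": "Transcribe these handwritten manuscript pages. "
                        "Use page numbers and context to put them in order."}]
    for name in jpegs:
        b64 = encode_image(os.path.join(converted_folder_path, name))
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable model would do
        messages=[{"role": "user", "content": content}],
    )
    print(response.choices[0].message.content)

In practice, 50 high-resolution pages would likely need to be split into smaller batches to stay under per-request size limits.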

Here’s the thing: in my haste, I hadn’t uploaded the images in the correct order. Page 1 might have been followed by page 15, then page 3. I was expecting to get back a jumble of transcribed text that I would have to painstakingly reassemble.

What I got back was astonishing.

The AI didn't just perform Optical Character Recognition (OCR). It understood the context. It recognized page numbers, chapter headings, and the narrative flow of the text. It not only transcribed the handwriting with incredible accuracy but also re-ordered the disparate image inputs into a perfectly sequential document.

This is a monumental leap from the transcription tools of just a few years ago. We've moved from simple character recognition to contextual understanding. The AI wasn't just a typist; it was acting as a developmental editor.

From Raw Text to a Fine-Tuned Model: The Road Ahead

This initial transcription is the 80/20 solution. It gets me 80% of the way there with 20% of the effort. But it’s just the beginning. My forcing function has not only pushed me to start this project but to think about the entire pipeline from end to end.

Here’s my raw project plan from my notes—the real road map for turning this into a serious, long-term asset.

# PROJECT ROADMAP
#
# 1. Convert Images (DONE)
#    - Python script handles the HEIC -> JPEG bottleneck.
# 2. Load to Database
#    - Store the raw text and corrected versions for training.
# 3. Run Basic LLM for 80/20 (DONE)
#    - Get the initial transcription.
# 4. Make Corrections
#    - Manually review and correct the AI's output to create a "golden dataset."
# 5. Load to Fine-Tune LLM
#    - Use the corrected text to fine-tune a model specifically on my handwriting and narrative style.
#    - Infrastructure thought: A Digital Ocean droplet or similar cloud VM with a 16-32GB GPU should be sufficient for this. Need to price this out.
# 6. Train
#    - Run the fine-tuning process. Train multiple versions and compare results.
# 7. Test
#    - Feed the fine-tuned model new handwritten pages and test its accuracy against the base model.
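Steps 2 and 4 boil down to a simple correction workflow: keep the raw transcription and the human-reviewed version side by side. The roadmap doesn't name a database, so here is a minimal sketch using SQLite from the Python standard library; the table and column names are my own, not from the notes.

    # Hypothetical sketch of roadmap steps 2 and 4. Schema is illustrative.
    import sqlite3

    conn = sqlite3.connect("manuscript.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS pages (
            page_number    INTEGER PRIMARY KEY,  -- order recovered by the LLM
            image_file     TEXT NOT NULL,        -- source JPEG for this page
            raw_text       TEXT NOT NULL,        -- the model's transcription
            corrected_text TEXT                  -- filled in during manual review
        )
    """)

    # Step 2: load a raw transcription (example values)
    conn.execute(
        "INSERT OR REPLACE INTO pages (page_number, image_file, raw_text) VALUES (?, ?, ?)",
        (1, "IMG_0001.jpg", "It was a dark and stormy night..."),
    )

    # Step 4: record a correction, building the "golden dataset" pair by pair
    conn.execute(
        "UPDATE pages SET corrected_text = ? WHERE page_number = ?",
        ("It was a dark and stormy night...", 1),
    )
    conn.commit()

    # Step 5's training pairs are then just (raw_text, corrected_text) rows
    rows = conn.execute(
        "SELECT raw_text, corrected_text FROM pages WHERE corrected_text IS NOT NULL"
    ).fetchall()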

Conclusion

A personal crisis can be a powerful lens, clarifying what’s truly important. For me, it was the catalyst to finally stop thinking about a project and start building it. But the journey also revealed how incredibly advanced and accessible the tools at our disposal have become.

A simple Python script solved a frustrating compatibility issue. A modern LLM did more than just transcribe; it understood narrative structure. And the path forward to building a custom-trained model on my own data is no longer the exclusive domain of large tech companies. It's a tangible, achievable project for any developer with a clear goal.

You don't need to wait for a crisis to create your own forcing function. Find that project you've been putting off, identify the first technical hurdle, and write the script that gets you past it. The tools are here. The technology is ready. You just have to start.
