Published on Friday, 26th December 2025
Research in AI makes it possible to significantly accelerate processes, handle massive volumes of information, and reduce false information in LLM outputs. Igor Malovytsia is an AI Researcher at SingularityNET, a company developing the concept of decentralized AI. Igor explained what this mission entails, which technologies can help make it real, and how modern research is applied in practice to optimize AI systems.
SingularityNET is a decentralized artificial intelligence platform built on blockchain technology. Its goal is to create a global marketplace and ecosystem where AI models, agents, and services can freely interact, combine, and be exchanged without a centralized owner. The founder of SingularityNET is Ben Goertzel, one of the most well-known researchers in the field of AGI (Artificial General Intelligence).
Most leading AI companies are focused on making models more powerful and safer, while centrally controlling their development. Company leaders, including Sam Altman, speak about deep automation in the future, which could lead to a redistribution of power and wealth in favor of AI owners. The study “Gradual Disempowerment” emphasizes that even the gradual growth of AI capabilities will reduce human participation in social systems, and therefore their ability to influence the future.
Against this backdrop, SingularityNET offers an alternative: decentralized AGI, where all participants – not just corporate shareholders – gain control over AI and benefit from the technology.
The company is also engaged in research at the intersection of AI and blockchain. The goal of this research is to improve and democratize AI. So far, the results have been strong and are helping drive the company forward.
I was invited to the company to work on a project called MORK, whose main author is Adam Vandervorst. My primary task is to research and improve technologies for graph processing. I work on data storage and loading, as well as on optimizing MORK: improving its performance so it can handle much larger volumes of data by storing information on disk. In this way, MORK helps scientists run much larger-scale experiments on graphs.
Furthermore, this solution could help mitigate LLM hallucinations – the generation of plausible but incorrect information. The problem of hallucinations exists even in the most advanced models, and there is still no complete solution. However, there is a theory that risks can be reduced by having LLMs use symbolic and logical reasoning – that is, reasoning based on concrete rules, mathematical formulas, and other well-defined facts.
MORK makes it possible to store and process these formulas, facts, and rules in a graph database and then perform logical transformations on them. We can make the AI use this logic and track exactly how it arrives at an answer. And if there is an error in the answer, it can be found in the logical chain and corrected.
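To make this concrete, here is a minimal sketch in Rust – not MORK's actual API; the types and the transitivity rule are purely illustrative. Facts are stored as symbolic triples, an explicit rule derives new facts, and every conclusion carries the premises it came from, so the logical chain can be inspected and, if needed, corrected:

```rust
// Hypothetical sketch of rule-based derivation with a traceable logical chain.
// Not MORK's real data model; it only illustrates the idea described above.

#[derive(Clone, Debug, PartialEq)]
struct Fact {
    subject: String,
    relation: String,
    object: String,
}

#[derive(Debug)]
struct Derived {
    fact: Fact,
    // The premises this fact was derived from: the auditable "logical chain".
    premises: Vec<Fact>,
}

/// Transitivity rule: (a is-a b) and (b is-a c) implies (a is-a c).
fn apply_transitivity(facts: &[Fact]) -> Vec<Derived> {
    let mut out = Vec::new();
    for f1 in facts {
        for f2 in facts {
            if f1.relation == "is-a" && f2.relation == "is-a" && f1.object == f2.subject {
                out.push(Derived {
                    fact: Fact {
                        subject: f1.subject.clone(),
                        relation: "is-a".into(),
                        object: f2.object.clone(),
                    },
                    premises: vec![f1.clone(), f2.clone()],
                });
            }
        }
    }
    out
}

fn main() {
    let facts = vec![
        Fact { subject: "penguin".into(), relation: "is-a".into(), object: "bird".into() },
        Fact { subject: "bird".into(), relation: "is-a".into(), object: "animal".into() },
    ];
    for d in apply_transitivity(&facts) {
        // If a conclusion is wrong, the error can be located in its premises.
        println!("{:?} because {:?}", d.fact, d.premises);
    }
}
```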
The key reason I was chosen for the project is my strong expertise in Rust. This programming language is known for its high speed and efficiency, and there are not many experienced specialists working with it. That is why I became part of this ambitious project.
The main technological innovation of MORK is a new graph-processing algorithm based on the scientific paper “Triemaps that Match” by Simon Peyton Jones, Richard Eisenberg, and Sebastian Graf. The essence of this approach lies in using a prefix tree (trie) not just as a data structure, but as a virtual machine that performs computations during tree traversal.
This means that each node in the tree contains not only data, but also processing logic. Computations are performed directly along the path while moving through the trie, and as a result the tree becomes not only a storage structure, but also an executable one. To my knowledge, no other similar systems exist, and this gives us advantages in speed and performance compared to existing alternatives.
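As a rough illustration of the idea – assuming nothing about MORK's internals – here is a minimal triemap in Rust: keys are token sequences (flattened symbolic expressions), and lookup is a computation performed while walking the tree, one token per level:

```rust
use std::collections::HashMap;

// Minimal triemap sketch, far simpler than MORK's real structure:
// the key is consumed level by level, so the work happens along the path.

struct TrieMap<V> {
    value: Option<V>,
    children: HashMap<String, TrieMap<V>>,
}

impl<V> Default for TrieMap<V> {
    fn default() -> Self {
        TrieMap { value: None, children: HashMap::new() }
    }
}

impl<V> TrieMap<V> {
    fn insert(&mut self, key: &[&str], value: V) {
        match key {
            [] => self.value = Some(value),
            [head, rest @ ..] => self
                .children
                .entry((*head).to_string())
                .or_default()
                .insert(rest, value),
        }
    }

    // Traversal *is* the computation: each step consumes one token and
    // dispatches to the matching child.
    fn lookup(&self, key: &[&str]) -> Option<&V> {
        match key {
            [] => self.value.as_ref(),
            [head, rest @ ..] => self.children.get(*head)?.lookup(rest),
        }
    }
}

fn main() {
    let mut m = TrieMap::default();
    // Flattened symbolic expressions stored as token paths.
    m.insert(&["add", "x", "y"], "rule-1");
    m.insert(&["add", "x", "0"], "rule-2");
    assert_eq!(m.lookup(&["add", "x", "0"]), Some(&"rule-2"));
}
```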
Before MORK, there were implementations of “Triemaps that Match” in other languages, but their performance was insufficient. After the critical part of the algorithm was rewritten in Rust, performance increased roughly sixfold.
Within the project, I worked with huge datasets that did not fit into RAM. I created a disk-based storage format for tries called Arena Compact Trie (ACT). Put simply, a conventional tree stores each branch separately in memory, and a lot of space is spent on auxiliary data. If you move the tree to disk, you can store data more compactly: reduce the amount of memory used for pointers, eliminate duplicates, and pack data more densely. This significantly reduces data volume and RAM requirements.
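Here is a simplified sketch of the arena idea in Rust – not the actual ACT format; the Node layout is hypothetical. All nodes live in one contiguous buffer and refer to each other by 4-byte indices instead of 8-byte pointers, so they pack densely and the whole buffer can be moved to or from disk as a single block:

```rust
// Hypothetical arena layout illustrating the ACT idea: one flat allocation,
// small indices instead of pointers, children stored contiguously.

struct Node {
    byte: u8,         // edge label on the path to this node
    first_child: u32, // index of the first child in the arena
    child_count: u32, // children occupy a contiguous run of slots
    is_terminal: bool, // does a key end here?
}

struct ArenaTrie {
    nodes: Vec<Node>, // the entire trie in one buffer
}

impl ArenaTrie {
    fn children(&self, n: &Node) -> &[Node] {
        let start = n.first_child as usize;
        &self.nodes[start..start + n.child_count as usize]
    }

    fn contains(&self, key: &[u8]) -> bool {
        let mut node = &self.nodes[0]; // root
        for &b in key {
            match self.children(node).iter().find(|c| c.byte == b) {
                Some(child) => node = child,
                None => return false,
            }
        }
        node.is_terminal
    }
}

fn main() {
    // Tiny trie holding the keys "a" and "ab", laid out by hand.
    let trie = ArenaTrie { nodes: vec![
        Node { byte: 0,    first_child: 1, child_count: 1, is_terminal: false }, // root
        Node { byte: b'a', first_child: 2, child_count: 1, is_terminal: true },  // "a"
        Node { byte: b'b', first_child: 3, child_count: 0, is_terminal: true },  // "ab"
    ]};
    assert!(trie.contains(b"ab"));
    assert!(!trie.contains(b"b"));
}
```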
First and foremost, we achieved high performance in MORK thanks to the transition to Rust. Previously, the project’s authors used Scala, but they were not satisfied with its memory usage and limited opportunities for parallelism. After studying various programming languages, they chose Rust: its type system is as rich as Scala’s, and it also enables the low-level optimizations that helped us reach the required speed.
One of the main ideas I would highlight is full control over memory in the application. For databases of this size, every byte matters – and with languages like Python or JavaScript, we wouldn’t have direct control over memory usage. Rust allows you to fully understand what is happening with memory and prevents typical memory errors, making it possible to optimize code safely.
The second idea is the use of memory mapping, which allows you to work with data on disk as if it were in memory (RAM). Because the graph-building process already writes its data from memory to disk, we can later open the file almost instantly, without reading it in full. This approach is widely used in databases and is a very good fit for MORK.
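A minimal sketch of this approach in Rust, using the memmap2 crate – the crate choice and the trie.act file name are illustrative, not necessarily what MORK uses:

```rust
use memmap2::Mmap;
use std::fs::File;

// The OS maps the file into the process's address space; pages are loaded
// lazily on first access, so "opening" is near-instant regardless of size.

fn main() -> std::io::Result<()> {
    let file = File::open("trie.act")?; // hypothetical file name
    // Safety: the mapping stays valid only as long as no other process
    // truncates or rewrites the file underneath us.
    let mmap = unsafe { Mmap::map(&file)? };

    // The file's bytes are now addressable as an ordinary slice; only the
    // pages we actually touch are read from disk.
    let bytes: &[u8] = &mmap;
    println!("mapped {} bytes, first byte: {:?}", bytes.len(), bytes.first());
    Ok(())
}
```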
The most difficult part for me is reading complex formulas in scientific journals and translating them into a language that the “hardware” understands. I often see excellent algorithms in academic publications that are never applied in practice because they are difficult to understand and adapt to real-world tasks, and academic authors usually do not have that goal. That is why I turn to journals to find unconventional ways of solving problems.
What brings me the most satisfaction is seeing people solve real-world problems using my software. And as an architect, I enjoy it when blocks that were carefully thought through and developed separately fit together seamlessly. Thinking through how exactly to connect these blocks and divide up their responsibilities is a highly labor-intensive process that requires a lot of trial and error. And when everything works out, it is truly rewarding.
As I understand it, the graph database is embedded in SingularityNET’s broader path toward AGI, including symbolic computation. Could you elaborate on this?
SingularityNET is pursuing several AGI-related directions. One of the most important is infrastructure development. SingularityNET has invested significant effort in this area over the past four years, and I can say that 2026 will demonstrate the results.
As for our product MORK, it has become a breakthrough solution. I’d also note that it already has a very wide range of applications: from spatial indices and hypergraph storage to programming-language and interpreter development and parallel data processing. Moreover, I do not rule out the possibility that our research will yield other important insights.
In my opinion, most breakthrough discoveries were not made intentionally, but rather emerged as byproducts of other research. One can recall, for example, the discovery of penicillin. Therefore, it is entirely possible that as we work on MORK, we’ll uncover other non-obvious applications for it.
First of all, MORK stands out for its performance and data scale. Compared to competitors, our system is at least three times faster. One of my favorite stories is from a conversation where our team compared algorithmic approaches with competitors, and they simply didn’t realize it was possible to achieve such low asymptotic complexity for these kinds of tasks. As a result, MORK will significantly accelerate research.
New algorithms often turn quantitative improvements into qualitative breakthroughs. Reducing the asymptotic complexity of an algorithm is a quantitative change, but its effect in practice is so enormous that it becomes qualitative: tasks that were previously computationally impossible because they would have taken millions of years become feasible in seconds.
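As a rough, hypothetical illustration: on an input of a billion elements, an algorithm doing n² work needs on the order of 10¹⁸ operations – about thirty years on a machine performing a billion operations per second – while an n log n algorithm needs roughly 3×10¹⁰ operations, or about half a minute on the same machine.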
I see three main directions for the development of MORK. The first is detailed documentation with various usage examples. It is important to us that the barrier to entry for MORK is as low as possible, and that the obvious benefits outweigh the effort required to use the product.
The second is expanding data sources and application domains. We have seen MORK/PathMap applied to solving problems in mathematics, genomics, and blockchain. And we will strive to broaden the range of use cases for the tool.
The third direction is giving users more control over data transformations. Today, the language for working with MORK is quite limited, but we are working on this as well.
As for future systems, one of the main goals is large-scale scientific computing. For example, one of the open questions in our research is how MORK can be applied to AGI work. What can be said with confidence is that future AI models will work much better with symbolic reasoning, and using MORK for that is one of the opportunities already available today.


