MaGGIe is an efficient framework for multi-instance human matting that uses sparse convolution and transformer attention to ensure temporal consistency in videos.

MaGGIe: Achieving Temporal Consistency in Video Instance Matting


Abstract and 1. Introduction

2. Related Works

3. MaGGIe

  3.1. Efficient Masked Guided Instance Matting

  3.2. Feature-Matte Temporal Consistency

4. Instance Matting Datasets

  4.1. Image Instance Matting and 4.2. Video Instance Matting

5. Experiments

  5.1. Pre-training on image data

  5.2. Training on video data

6. Discussion and References

Supplementary Material

7. Architecture details

8. Image matting

  8.1. Dataset generation and preparation

  8.2. Training details

  8.3. Quantitative details

  8.4. More qualitative results on natural images

9. Video matting

  9.1. Dataset generation

  9.2. Training details

  9.3. Quantitative details

  9.4. More qualitative results

Abstract

Human matting is a foundational task in image and video processing, in which human foreground pixels are extracted from the input. Prior works either improve accuracy with additional guidance or improve the temporal consistency of a single instance across frames. We propose MaGGIe, Masked Guided Gradual Human Instance Matting, a new framework that predicts alpha mattes progressively for each human instance while maintaining computational cost, precision, and consistency. Our method leverages modern architectures, including transformer attention and sparse convolution, to output all instance mattes simultaneously without exploding memory or latency. While keeping a constant inference cost in the multi-instance scenario, our framework achieves robust and versatile performance on our proposed synthesized benchmarks. Alongside higher-quality image and video matting benchmarks, we introduce a novel multi-instance synthesis approach built from publicly available sources to improve the generalization of models to real-world scenarios. Our code and datasets are available at https://maggie-matt.github.io.

1. Introduction

In image matting, the core task is to predict the pixel transparency, i.e., the alpha matte α ∈ [0, 1], for precise background removal. Considering a salient image I with two main components, foreground F and background B, the image is expressed as I = αF + (1 − α)B. Because of the ambiguity in detecting the foreground region, for example, whether a person’s belongings are part of the human foreground or not, many methods [11, 16, 31, 37] leverage additional guidance, typically trimaps, which define foreground, background, and unknown (transition) regions. However, creating trimaps, especially for videos, is resource-intensive. Alternative binary masks [39, 56] are simpler to obtain by human annotation or off-the-shelf segmentation models, and they offer greater flexibility without the hard constraints that trimaps place on region output values.

Figure 1. Our MaGGIe delivers precise and temporally consistent alpha mattes. It adeptly preserves intricate details and demonstrates robustness against noise in instance guidance masks by effectively utilizing information from adjacent frames. Red arrows highlight the areas of detailed zoom-in. (Best viewed in color with digital zoom.)

Our work focuses on, but is not limited to, human matting because of the higher number of available academic datasets and greater user demand in many applications [1, 2, 12, 15, 44] compared to other objects.
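The compositing relation I = αF + (1 − α)B can be sketched in a few lines; this is an illustrative NumPy version, not code from the paper:

```python
import numpy as np

def composite(fg, bg, alpha):
    """Composite a foreground over a background: I = alpha*F + (1 - alpha)*B.

    fg, bg: float arrays of shape (H, W, 3) with values in [0, 1].
    alpha:  float array of shape (H, W, 1) with values in [0, 1].
    """
    return alpha * fg + (1.0 - alpha) * bg

# A 1x1 "image": a pixel that is 25% red foreground over a blue background.
fg = np.array([[[1.0, 0.0, 0.0]]])
bg = np.array([[[0.0, 0.0, 1.0]]])
alpha = np.array([[[0.25]]])
out = composite(fg, bg, alpha)  # pixel value [0.25, 0.0, 0.75]
```

Matting is the inverse of this composition: given only I (and some guidance), recover α, which is ill-posed wherever F and B are unknown.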

When working with video input, the problem of creating trimap guidance is often addressed by guidance propagation [17, 45], an idea borrowed from video object segmentation [8, 38]. However, the performance of trimap propagation degrades as video length grows. Failed trimap predictions, which miss properties such as the alignment between foreground, unknown, and background regions, lead to incorrect alpha mattes. We observe that using binary masks for each frame gives more robust results. Still, consistency between frames’ outputs remains important for any video matting approach: for example, holes appearing in a random frame because of wrong guidance should be corrected by consecutive frames. Many works [17, 32, 34, 45, 53] enforce temporal consistency on feature maps between frames. Since alpha matte values are very sensitive, feature-level aggregation alone does not guarantee consistent outputs. Some methods [21, 50] in video segmentation and matting compute incoherent regions to update values across frames. We propose a temporal consistency module that works in both feature and output spaces to produce consistent alpha mattes.
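To make the matte-level side of such a consistency idea concrete, here is a minimal sketch with a hypothetical fusion rule (the threshold, the feature-similarity test, and the "copy from previous frame" correction are our illustrative assumptions, not the paper's exact formulation): pixels whose alpha changes sharply between frames while the underlying features stay similar are flagged as incoherent and corrected from the neighboring frame.

```python
import numpy as np

def fuse_temporal(alpha_prev, alpha_curr, feat_sim, thresh=0.1):
    """Matte-level temporal fusion (illustrative rule, not the paper's exact one).

    Pixels whose alpha changes sharply between frames (|da| > thresh) while
    the underlying features stay similar (feat_sim high) are treated as
    incoherent and replaced by the previous frame's value.
    """
    incoherent = (np.abs(alpha_curr - alpha_prev) > thresh) & (feat_sim > 0.9)
    fused = np.where(incoherent, alpha_prev, alpha_curr)
    return fused, incoherent

# A hole flickering open in one frame (alpha drops 0.5 -> 0.0) while the
# features barely change gets detected and filled from the previous frame.
prev = np.full((2, 2), 0.5)
curr = np.array([[0.5, 0.0], [0.5, 0.52]])
fused, incoherent = fuse_temporal(prev, curr, np.ones((2, 2)))
```

Note how the small, legitimate change (0.5 → 0.52) passes through untouched, while the abrupt hole is repaired; a real module would learn these decisions rather than hard-code them.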


Besides temporal consistency, when extending instance matting to videos containing many frames and instances, a careful network design that prevents an explosion in computational cost is also a key challenge. In this work, we propose several adjustments to the popular mask-guided progressive refinement architecture [56]. First, by using a mask guidance embedding inspired by AOT [55], the input size reduces to a constant number of channels. Second, following the success of transformer attention in various vision tasks [40–42], we inherit query-based instance segmentation [7, 19, 23] to predict all instance mattes in one forward pass instead of estimating them separately. This also replaces the complex refinement of previous work with interaction between instances via the attention mechanism. To limit the high cost of transformer attention, we only perform multi-instance prediction at the coarse level and adopt progressive refinement at multiple scales [18, 56]. However, using full convolution for the refinement, as in previous works, is inefficient because fewer than 10% of values are updated at each scale, as also noted in [50]. Replacing it with sparse convolution [36] reduces the inference cost significantly and keeps the complexity of the algorithm constant, since only locations of interest are refined. Nevertheless, the lack of context at larger scales when using sparse convolution can cause a dominance problem, where the higher-scale prediction copies the lower-scale outputs without adding fine-grained details. We propose an instance guidance method that lets the coarser prediction guide, but not contribute to, the finer alpha matte.
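The efficiency argument — only refining the small transition band rather than the full image — can be sketched with dense NumPy arrays and a boolean mask (an illustration of the idea; the actual model uses sparse convolution [36], and the band thresholds here are assumed values):

```python
import numpy as np

def sparse_refine(coarse_alpha, fine_residual, band=(0.05, 0.95)):
    """Refine only transition pixels of an upsampled coarse matte (illustrative).

    Only pixels whose coarse value lies strictly inside `band` — the
    uncertain transition region, typically under 10% of the image — receive
    the fine-scale residual; confident pixels are copied through untouched,
    mimicking what sparse convolution achieves structurally.
    """
    active = (coarse_alpha > band[0]) & (coarse_alpha < band[1])
    refined = coarse_alpha.copy()
    refined[active] = np.clip(coarse_alpha[active] + fine_residual[active], 0.0, 1.0)
    return refined, active

# Confident pixels (0.0 and 1.0) pass through; only uncertain ones update.
coarse = np.array([[0.0, 0.5], [1.0, 0.9]])
refined, active = sparse_refine(coarse, np.full((2, 2), 0.2))
```

Because the work scales with the number of active pixels rather than the image size, the per-scale refinement cost stays roughly constant as resolution grows.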

In addition to the framework design, we propose a new training video dataset and benchmarks for instance-aware matting. Besides a new large-scale, high-quality synthesized image instance matting dataset, we extend the current instance image matting benchmark with guidance of varying quality to test robustness. For video input, our synthesized training set and benchmark are constructed from various public instance-agnostic datasets at three levels of difficulty.

In summary, our contributions include:

• A highly efficient instance matting framework with mask guidance in which all instances interact and are processed in a single forward pass.

• A novel approach that operates at both the feature and matte levels to maintain temporal consistency in videos.

• Diverse training datasets and robust benchmarks for image and video instance matting that bridge the gap between synthesized and natural cases.


This paper is available on arXiv under the CC BY 4.0 (Attribution 4.0 International) license.
