MaGGIe excels in hair rendering and instance separation on natural images, outperforming MGM and InstMatt in complex, multi-instance scenarios.

Robust Mask-Guided Matting: Managing Noisy Inputs and Object Versatility


Abstract and 1. Introduction

  2. Related Works

  3. MaGGIe

    3.1. Efficient Masked Guided Instance Matting

    3.2. Feature-Matte Temporal Consistency

  4. Instance Matting Datasets

    4.1. Image Instance Matting and 4.2. Video Instance Matting

  5. Experiments

    5.1. Pre-training on image data

    5.2. Training on video data

  6. Discussion and References

Supplementary Material

  7. Architecture details

  8. Image matting

    8.1. Dataset generation and preparation

    8.2. Training details

    8.3. Quantitative details

    8.4. More qualitative results on natural images

  9. Video matting

    9.1. Dataset generation

    9.2. Training details

    9.3. Quantitative details

    9.4. More qualitative results

8.4. More qualitative results on natural images

Fig. 13 showcases our model’s performance in challenging scenarios, particularly in accurately rendering hair regions. Our framework consistently outperforms MGM⋆ in detail preservation, especially in complex instance interactions. In comparison with InstMatt, our model exhibits superior instance separation and detail accuracy in ambiguous regions.

Fig. 14 and Fig. 15 illustrate the performance of our model and previous works in extreme cases involving multiple instances. While MGM⋆ struggles with noise and accuracy in dense instance scenarios, our model maintains high precision. InstMatt, without additional training data, shows limitations in these complex settings.

The robustness of our mask-guided approach is further demonstrated in Fig. 16. Here, we highlight the difficulty MGM variants and SparseMat have in recovering parts missing from the mask inputs, which our model addresses. However, it is important to note that our model is not designed as a human instance segmentation network. As shown in Fig. 17, our framework adheres to the input guidance, ensuring precise alpha matte prediction even with multiple instances in the same mask.

Lastly, Fig. 12 and Fig. 11 emphasize our model’s generalization capabilities. The model accurately extracts both human subjects and other objects from backgrounds, showcasing its versatility across various scenarios and object types.

All examples are Internet images without ground truth, and masks from r101fpn400e are used as the guidance.
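For reference, guidance masks of this kind can be produced with an off-the-shelf instance segmentation model. The sketch below is a minimal, hypothetical example using Detectron2's Mask R-CNN: the standard COCO R101-FPN 3x config stands in for the r101fpn400e model referenced above, and the image path, score threshold, and person-only filtering are assumptions rather than the paper's exact setup.

```python
# Hedged sketch: obtain per-instance binary guidance masks with Detectron2.
# The standard COCO R101-FPN 3x config stands in for r101fpn400e; the image
# path, score threshold, and person-only filter are illustrative assumptions.
import cv2
import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # assumed confidence threshold

predictor = DefaultPredictor(cfg)
image = cv2.imread("example.jpg")            # any natural image (BGR)
instances = predictor(image)["instances"].to("cpu")

# Keep person detections (COCO class 0) and stack their binary masks as
# per-instance guidance channels for the matting model.
person = instances.pred_classes == 0
guidance = instances.pred_masks[person].numpy().astype(np.uint8)  # (N, H, W)
```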

Figure 13. Our model produces highly detailed alpha mattes on natural images. Our results show that it is accurate and comparable with previous instance-agnostic and instance-aware methods without expensive computational costs. Red squares zoom in on the detail regions for each instance. (Best viewed in color and digital zoom).

Figure 14. Our framework precisely separates instances in an extreme case with many instances. While MGM often causes overlap between instances and MGM⋆ contains noise, ours produces on-par results with InstMatt trained on an external dataset. Red arrows indicate the errors. (Best viewed in color and digital zoom).

Figure 15. Our framework precisely separates instances in a single pass. The proposed solution shows results comparable with InstMatt and MGM without running the prediction/refinement five times. Red arrows indicate the errors. (Best viewed in color and digital zoom).

Figure 16. Unlike MGM and SparseMat, our model is robust to the input guidance mask. With the attention head, our model produces results that are more stable to mask inputs, without the complex inter-instance refinement used by InstMatt. Red arrows indicate the errors. (Best viewed in color and digital zoom).

Figure 17. Our solution works correctly with multi-instance mask guidance. When multiple instances exist in one guidance mask, we still produce the correct union alpha matte for those instances. Red arrows indicate the errors or the zoomed-in region in the red box. (Best viewed in color and digital zoom).
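To make the multi-instance guidance case of Fig. 17 concrete, the hypothetical helper below merges several binary instance masks into one guidance mask; given such a merged mask, the model predicts a single alpha matte covering the union of those instances.

```python
import numpy as np

def merge_guidance(masks: np.ndarray) -> np.ndarray:
    """Union several binary instance masks (N, H, W) into one guidance mask (1, H, W).

    Hypothetical helper for illustration; with this single merged mask the
    model returns the union alpha matte for all covered instances.
    """
    return masks.max(axis=0, keepdims=True)
```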

Table 12. Details of quantitative results on HIM2K+M-HIM2K (Extension of Table 5). Gray indicates the public weight without retraining.

Table 13. The effectiveness of the proposed temporal consistency modules on V-HIM60 (Extension of Table 6). The combination of bi-directional Conv-GRU and forward-backward fusion achieves the best overall performance on the three test sets. Bold highlights the best for each level.
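For readers curious about the module names in Table 13, the PyTorch sketch below shows one plausible minimal form of a bi-directional Conv-GRU with forward-backward fusion. The cell definition, channel handling, and the 1x1 fusion convolution are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU cell (illustrative, not the paper's exact module)."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.gates = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=pad)
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=pad)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

class BiConvGRU(nn.Module):
    """Bi-directional Conv-GRU with a simple 1x1 forward-backward fusion."""
    def __init__(self, channels: int):
        super().__init__()
        self.fwd, self.bwd = ConvGRUCell(channels), ConvGRUCell(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feats):                    # feats: (T, B, C, H, W)
        T, B, C, H, W = feats.shape
        h_f = feats.new_zeros(B, C, H, W)
        h_b = feats.new_zeros(B, C, H, W)
        fwd, bwd = [], [None] * T
        for t in range(T):                       # forward pass over time
            h_f = self.fwd(feats[t], h_f)
            fwd.append(h_f)
        for t in reversed(range(T)):             # backward pass over time
            h_b = self.bwd(feats[t], h_b)
            bwd[t] = h_b
        # fuse the two temporal directions per frame
        return torch.stack([self.fuse(torch.cat([f, b], dim=1))
                            for f, b in zip(fwd, bwd)], dim=0)
```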


:::info Authors:

(1) Chuong Huynh, University of Maryland, College Park (chuonghm@cs.umd.edu);

(2) Seoung Wug Oh, Adobe Research (seoh@adobe.com);

(3) Abhinav Shrivastava, University of Maryland, College Park (abhinav@cs.umd.edu);

(4) Joon-Young Lee, Adobe Research (jolee@adobe.com).

:::


:::info This paper is available on arXiv under the CC BY 4.0 (Attribution 4.0 International) license.

:::
