Reviews state-of-the-art MLLMs. Highlights the challenge of expanding current models beyond the simple one-to-one image-text relationship.

MLLM Adapters: Review of VPGs and Multimodal Fusion


Abstract and 1 Introduction

  2. Related Work

    2.1. Multimodal Learning

    2.2. Multiple Instance Learning

  3. Methodology

    3.1. Preliminaries and Notations

    3.2. Relations between Attention-based VPG and MIL

    3.3. MIVPG for Multiple Visual Inputs

    3.4. Unveiling Instance Correlation in MIVPG for Enhanced Multi-instance Scenarios

  4. Experiments and 4.1. General Setup

    4.2. Scenario 1: Samples with Single Image

    4.3. Scenario 2: Samples with Multiple Images, with Each Image as a General Embedding

    4.4. Scenario 3: Samples with Multiple Images, with Each Image Having Multiple Patches to be Considered and 4.5. Case Study

  5. Conclusion and References

Supplementary Material

A. Detailed Architecture of QFormer

B. Proof of Proposition

C. More Experiments

2. Related Work

2.1. Multimodal Learning

Recently, various vision-language models (VLMs) have been proposed to enhance the fusion of text and images. For example, TCL [42] employs triplet contrastive learning to learn from text and images simultaneously. Many state-of-the-art MLLMs have also emerged, with one major distinction lying in the design of their VPGs. For instance, FROMAGe [18] and LLaVA [24] employ a straightforward linear projection as their VPG. Flamingo [2], on the other hand, introduces the Perceiver Resampler, which combines cross-attention with learnable query embeddings. BLIP2 [22] employs the QFormer to improve image-text alignment, while MiniGPT-4 [48] integrates a frozen QFormer with additional learnable layers for enhanced performance.
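To make the distinction between these VPG designs concrete, the sketch below contrasts a linear-projection adapter (in the spirit of FROMAGe and LLaVA) with a learnable-query cross-attention adapter (in the spirit of the Perceiver Resampler and QFormer). This is a minimal PyTorch illustration, not code from any of the cited papers; the class names, hidden dimensions, and the choice of 32 queries are assumptions made here for demonstration only.

```python
# Minimal sketch of two common VPG styles (illustrative only, not the cited implementations).
import torch
import torch.nn as nn

class LinearVPG(nn.Module):
    """Linear-projection adapter: map each visual token into the LLM embedding space."""
    def __init__(self, vis_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_tokens):            # (B, N, vis_dim)
        return self.proj(vis_tokens)          # (B, N, llm_dim): one soft token per patch

class QueryAttentionVPG(nn.Module):
    """Learnable-query adapter: a fixed set of queries cross-attends to the visual
    tokens, yielding a compact, length-invariant prompt for the LLM."""
    def __init__(self, vis_dim=1024, llm_dim=4096, num_queries=32, hidden=768, heads=12):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, hidden))
        self.kv_proj = nn.Linear(vis_dim, hidden)
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.out_proj = nn.Linear(hidden, llm_dim)

    def forward(self, vis_tokens):                          # (B, N, vis_dim)
        B = vis_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)     # (B, Q, hidden)
        kv = self.kv_proj(vis_tokens)                       # (B, N, hidden)
        attended, _ = self.cross_attn(q, kv, kv)            # queries attend to visual tokens
        return self.out_proj(attended)                      # (B, Q, llm_dim)

# Usage: either adapter turns ViT patch embeddings into soft prompts for a frozen LLM.
patches = torch.randn(2, 257, 1024)        # e.g., CLS + 16x16 patches from a ViT
print(LinearVPG()(patches).shape)          # torch.Size([2, 257, 4096])
print(QueryAttentionVPG()(patches).shape)  # torch.Size([2, 32, 4096])
```

The practical difference is that the linear adapter keeps one soft token per visual patch, whereas the query-based adapter compresses an arbitrary number of visual tokens into a fixed-length prompt, which matters once the number of visual inputs per sample grows.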

While successful across diverse tasks, current multimodal models are primarily designed under the assumption of a one-to-one relationship between text and image inputs. In reality, the relationship between text and images can be one-to-many or many-to-many, and effectively applying multimodal models in such scenarios remains an open challenge.


:::info Authors:

(1) Wenliang Zhong, The University of Texas at Arlington (wxz9204@mavs.uta.edu);

(2) Wenyi Wu, Amazon (wenyiwu@amazon.com);

(3) Qi Li, Amazon (qlimz@amazon.com);

(4) Rob Barton, Amazon (rab@amazon.com);

(5) Boxin Du, Amazon (boxin@amazon.com);

(6) Shioulin Sam, Amazon (shioulin@amazon.com);

(7) Karim Bouyarmane, Amazon (bouykari@amazon.com);

(8) Ismail Tutar, Amazon (ismailt@amazon.com);

(9) Junzhou Huang, The University of Texas at Arlington (jzhuang@uta.edu).

:::


:::info This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.

:::
