Knowledge Bases and Multiple Modalities
Virtual Workshop at the 2nd Conference on Automated Knowledge Base Construction (AKBC 2020)
June 25, 2020


Recently, there has been growing interest in combining knowledge bases with multiple modalities such as NLP, vision, and speech. These combinations have yielded improvements on various downstream tasks, including question answering, image classification, object detection, and link prediction. The objectives of the KBMM workshop are to bring together researchers interested in (a) combining knowledge bases with other modalities to improve downstream tasks, (b) improving the completion and construction of knowledge bases from multiple modalities, and, in general, to share state-of-the-art approaches, best practices, and future directions.


  • Axel Ngonga, Paderborn University
  • Title: Structured Machine Learning for Industrial Machinery
  • Mathias Niepert, NEC Laboratories Europe
  • Title: Towards Multimodal and Explainable Knowledge Graphs
  • Mike Tung, Diffbot
  • Title: The Diffbot Knowledge Graph
  • Chenyan Xiong, Microsoft
  • Title: Representation Learning and Reasoning with Semi-structured Free-Text Knowledge Graph


  • 8:25 AM - 8:30 AM : Opening Remarks
  • 8:30 AM - 9:15 AM : Invited talk: Axel Ngonga, Structured Machine Learning for Industrial Machinery
  • 9:15 AM - 10:00 AM : Invited talk: Chenyan Xiong, Representation Learning and Reasoning with Semi-structured Free-Text Knowledge Graph
  • 10:00 AM - 10:15 AM : Break
  • 10:15 AM - 10:30 AM : Invited student talk: Kenneth Marino, Visual Question Answering Benchmark Requiring External Knowledge
  • 10:30 AM - 11:15 AM : Invited talk: Mathias Niepert, Towards Multimodal and Explainable Knowledge Graphs
  • 11:15 AM - 11:30 AM : Break
  • 11:30 AM - 11:45 AM : Invited student talk: Nitisha Jain, Multimodal Knowledge Graphs for Semantic Analysis of Cultural Heritage Data
  • 11:45 AM - 12:30 PM : Invited talk: Mike Tung, The Diffbot Knowledge Graph
  • 12:30 PM - 12:45 PM : Closing Remarks


    The workshop on Knowledge Bases and Multiple Modalities (KBMM) will consist of contributed posters and invited talks on a wide variety of methods and problems in this area. We invite extended abstract submissions in the following categories to present at the workshop:

  • Knowledge base completion using multiple modalities
  • Using knowledge bases in NLP tasks
  • Vision and knowledge bases
  • Information extraction
  • Optimization challenges in multimodal scenarios
  • Benchmark datasets and evaluation methods

    Call for extended abstracts

    We invite submissions of extended abstracts related to Knowledge Bases and Multiple Modalities (KBMM). Since the workshop will not have proceedings comprising full versions of the papers, concurrent submissions to other venues, as well as previously accepted work, are allowed, provided that any concurrent submission or intention to submit to other venues is declared to all venues, including KBMM. Accepted work will be presented orally during the workshop and listed on this website.

    Reviewing Policy:

    Submissions will be refereed on the basis of technical quality, potential impact, and clarity. At least one author of each accepted submission will be required to present the work virtually.

    Submission instructions

    1) Prepare a 1-page abstract.
    2) Upload your submission (PDF only) via the following Google form:
    submission website.
    3) In case of any queries, please send an email to

    Important dates

    Abstract submission: June 13, 2020, 11:59pm PST.
    Acceptance/rejection notification: June 15, 2020.
    Workshop: June 25, 2020.


    KBMM-2020 will be a fully virtual event. You can find the live event here.

