Recently, there has been growing interest in combining knowledge bases with multiple modalities such as NLP, vision, and speech. These combinations have improved various downstream tasks, including question answering, image classification, object detection, and link prediction. The objective of the KBMM workshop is to bring together researchers interested in (a) combining knowledge bases with other modalities to enable more effective downstream tasks, (b) improving the completion and construction of knowledge bases from multiple modalities, and, in general, in sharing state-of-the-art approaches, best practices, and future directions.
The workshop on Knowledge Bases and Multiple Modalities (KBMM) will consist of contributed posters and invited talks on a wide variety of methods and problems in this area. We invite submissions of extended abstracts to present at the workshop.
Since the workshop will not have proceedings comprising full versions of the papers, concurrent submissions to other venues, as well as already accepted work, are allowed, provided that the concurrent submission or the intention to submit to other venues is declared to all venues, including KBMM. Accepted work will be presented orally during the workshop and listed on this website.
Submissions will be refereed on the basis of technical quality, potential impact, and clarity. At least one author of each accepted submission will be required to present the work virtually.
1) Prepare a 1-page abstract.
2) Upload your submission (PDF only) via the Google form at the submission website.
3) For any queries, please email pezeshkp@uci.edu.
KBMM-2020 will be a fully virtual event. You can find the live event here.