Ontology-based semantic web image retrieval by utilizing textual and visual annotations

Ja Hwung Su*, Bo Wen Wang, Hsin Ho Yeh, S. Tseng

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

The goal of traditional visual- or text-based image retrieval is to satisfy users' queries by effectively associating images with semantic concepts. As a result, the perceptual structures of images have attracted researchers' attention in recent studies. However, few past studies have achieved semantic image retrieval by using image annotation techniques. To capture users' ontological intentions, we propose a new approach, named Intelligent Web Image FetchER (iWIFER), which simultaneously considers the ontological requirements of usability, intelligence, and effectiveness. Based on the proposed visual- and textual-based annotation models, image querying becomes easy and effective. Empirical evaluations show that our annotation models deliver accurate results for semantic web image retrieval.
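The abstract gives no algorithmic detail, so the Python sketch below is a rough illustration only of the general idea it names: expanding a query concept through an ontology and fusing textual annotation matches with visual annotation confidences. Everything here (the toy ONTOLOGY, expand_concept, retrieve, the weight alpha) is a hypothetical stand-in, not the authors' iWIFER models.

from dataclasses import dataclass, field

# Hypothetical toy ontology: concept -> set of narrower (child) concepts.
ONTOLOGY = {
    "animal": {"dog", "cat"},
    "dog": set(),
    "cat": set(),
}

def expand_concept(concept):
    """Return the query concept plus all narrower concepts in the toy ontology."""
    result = {concept}
    for child in ONTOLOGY.get(concept, set()):
        result |= expand_concept(child)
    return result

@dataclass
class Image:
    name: str
    annotations: set                                    # textual annotations (concept labels)
    visual_scores: dict = field(default_factory=dict)   # concept -> visual confidence in [0, 1]

def retrieve(images, query_concept, alpha=0.5):
    """Rank images by a linear fusion of textual and visual evidence.

    alpha weights the (binary) textual match against the visual confidence;
    both scoring models are placeholders for the paper's annotation models.
    """
    concepts = expand_concept(query_concept)
    ranked = []
    for img in images:
        text_score = 1.0 if img.annotations & concepts else 0.0
        visual_score = max((img.visual_scores.get(c, 0.0) for c in concepts), default=0.0)
        ranked.append((alpha * text_score + (1 - alpha) * visual_score, img.name))
    return sorted(ranked, reverse=True)

if __name__ == "__main__":
    images = [
        Image("beach.jpg", {"sea"}, {"dog": 0.1}),
        Image("puppy.jpg", {"dog"}, {"dog": 0.9}),
        Image("pet.jpg", set(), {"cat": 0.7}),
    ]
    # A query for "animal" matches "dog"/"cat" images via ontology expansion.
    for score, name in retrieve(images, "animal"):
        print(f"{name}: {score:.2f}")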

Original language: English
Title of host publication: Proceedings - 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT Workshops 2009
Pages: 425-428
Number of pages: 4
DOIs
State: Published - 1 Dec 2009
Event: 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT Workshops 2009 - Milano, Italy
Duration: 15 Sep 2009 → 18 Sep 2009

Publication series

Name: Proceedings - 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT Workshops 2009
Volume: 3

Conference

Conference: 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Workshops, WI-IAT Workshops 2009
Country: Italy
City: Milano
Period: 15/09/09 → 18/09/09

