
T4Tutorials.com

Semantic Web research topics and topic ideas for MS and Ph.D. degrees.

I am sharing some research topics on the Semantic Web that you can choose for a research proposal or thesis toward an MS or Ph.D. degree.

  • Representing construction-related geometry in a semantic web context: A review of approaches
  • A review of the semantic web field
  • "Sampo" Model and Semantic Portals for Digital Humanities on the Semantic Web.
  • A systematic literature review on semantic web enabled software testing
  • Semantic web technologies for the internet of things: Systematic literature review
  • Enhancing the functionality of augmented reality using deep learning, semantic web and knowledge graphs: A review
  • BIM and semantic web-based maintenance information for existing buildings
  • Effective information retrieval and feature minimization technique for semantic web data
  • Servicing your requirements: An FCA- and RCA-driven approach for semantic web services composition
  • Towards a New Scalable Big Data System Semantic Web Applied on Mobile Learning.
  • Semantic Web: A Review Of The Field
  • An overview of massive open online course platforms: personalization and semantic web technologies and standards
  • SeMantic AnsweR Type prediction task (SMART) at ISWC 2020 Semantic Web Challenge
  • Leveraging cloud computing for the semantic web: review and trends
  • Semantic web service composition using semantic similarity measures and formal concept analysis
  • A Dynamic Dashboarding Application for Fleet Monitoring Using Semantic Web of Things Technologies
  • Knowledge representation with ontologies and semantic web technologies to promote augmented and artificial intelligence in systems engineering
  • Semantic web technologies applied to software accessibility evaluation: a systematic literature review
  • Protein ontology on the semantic web for knowledge discovery
  • Ontology-based knowledge representation for industrial megaprojects analytics using linked data and the semantic web
  • Interpretation and automatic integration of geospatial data into the semantic web
  • SETLBI: An Integrated Platform for Semantic Business Intelligence
  • Differentially Private Stream Processing for the Semantic Web
  • Software as a service, Semantic Web, and big data: Theories and applications
  • Accessing Provenance Records in Semantic Web Services
  • Sustainable multi-layered open data processing model for agriculture: IoT based case study using semantic web for hazelnut fields
  • Semantic Web Applications: Current Trends in Datasets, Tools and Technologies’ Development for Linked Open Data
  • Graphical and collaborative annotation support for semantic Web services
  • X3D Ontology for Querying 3D Models on the Semantic Web
  • Migrating a complex classification scheme to the semantic web: expressing the Integrative Levels Classification using SKOS RDF
  • Semantic SPA framework for situational student-project allocation in education
  • Towards Using Semantic-Web Technologies for Multi-Modal Knowledge Graph Construction
  • MWPD2020: Semantic Web challenge on Mining the Web of HTML-embedded product data
  • Scan-to-graph: Semantic enrichment of existing building geometry
  • Towards knowledge-based geovisualisation using Semantic Web technologies: A knowledge representation approach coupling ontologies and rules
  • SchemaDecrypt++: Parallel on-line Versioned Schema Inference for Large Semantic Web Data sources
  • Multi-Intentional Description of Learning Semantic Web Services
  • Architecture for semantic web service composition in spatial data infrastructures
  • Machine learning for the semantic web: Lessons learnt and next research directions
  • Towards the semantic formalization of science
  • Neural-symbolic integration and the semantic web
  • Lifting preferences to the semantic web: PreferenceSPARQL
  • Re-sculpturing Semantic Web of Things as a Strategy for Internet of Things’ Intrinsic Contradiction
  • Using the Semantic Web in Academic Websites
  • SEMDPA: A semantic Web crossroad architecture for WSNs in the Internet of Things
  • Bridging the Technology Gap Between Industry and Semantic Web: Generating Databases and Server Code From RDF
  • Using the Semantic Web in Digital Humanities: Shift from data publishing to data-analysis and serendipitous knowledge discovery
  • A semantic web methodological framework to evaluate the support of integrity in thesaurus tools
  • Querying the Semantic Web via Rules
  • Real-Time Semantic Web Data Stream Processing Using Storm
  • Determinants of semantic web technology adoption from IT professionals’ perspective: Industry competition, organization innovativeness, and data management …
  • From the web of bibliographic data to the web of bibliographic meaning: structuring, interlinking and validating ontologies on the semantic web
  • Semantic Web Environments for Multi-Agent Systems: Enabling agents to use Web of Things via semantic web
  • Semantic web design of rfid pharmaceutical drugs analytics and clinics tracking system
  • Semantic Oriented Data Modeling for Enterprise Application Engineering Using Semantic Web Languages
  • Using IoT and Semantic Web Technologies for Healthcare and Medical Sector
  • The missing link–A semantic web based approach for integrating screencasts with security advisories
  • STILTool: a semantic table interpretation evaluation tool
  • A State of the Art on Big Data With Semantic Web Technologies
  • Semantic Web Application and Framework Development in South African Higher Education Institutions
  • A hybridized semantic trust-based framework for personalized web page recommendation
  • A Compositional Semantics for a Wide-Coverage Natural-Language Query Interface to a Semantic Web Triplestore
  • Flexible data partitioning schemes for parallel merge joins in semantic web queries
  • The Integration of OGC SensorThings API and OGC CityGML via Semantic Web Technology
  • A decade of Semantic Web research through the lenses of a mixed methods approach
  • Supporting Online Teaching Laboratories with Semantic Web
  • A Tool for Transforming Semantic Web Rule Language to SPARQL Inferencing Notation
  • E‐maintenance platform design for public infrastructure maintenance based on IFC ontology and Semantic Web services
  • A Conceptual Model of Indonesian Question Answering System based on Semantic Web
  • Semantic web for information exchange between the building and manufacturing industries: a literature review
  • Semantic Web and Data Visualization
  • Semantic communications between distributed cyber-physical systems towards collaborative automation for smart manufacturing
  • The Semantic Web: ESWC 2020 Satellite Events
  • Toward automatic web service composition based on multilevel workflow orchestration and semantic web service discovery
  • Scalable, Efficient and Precise Natural Language Processing in the Semantic Web
  • Fostering an awesome tool ecosystem for the semantic web
  • Personalized Recommendation of Learning Objects Through Bio-inspired Algorithms and Semantic Web Technologies: an Experimental Analysis
  • The Importance of Researching and Developing the Semantic Web of Things
  • Semantic Web Ontology for Vocational Education Self-Evaluation System
  • Information extraction meets the semantic web: a survey
  • A comprehensive survey on semantic interoperability for Internet of Things: State‐of‐the‐art and research challenges
  • LIMES: A Framework for Link Discovery on the Semantic Web
  • Using Multi-Agent Microservices for a Better Dynamic Composition of Semantic Web services
  • Representing Activities associated with Processing of Personal Data and Consent using Semantic Web for GDPR Compliance
  • Decentralized Control and Adaptation in Distributed Applications Via Web and Semantic Web Technologies
  • Semantic Web-Based Blueprint for Digital Healthcare in A Resource Constrained Environment: Towards A Connected Healthcare Ecosystem
  • Approaches for Efficient Query Optimization Using Semantic Web Technologies
  • Invited keynote on IOT4SAFE 2020: Semantic Web Technologies in Fighting Crime and Terrorism: The CONNEXIONs Approach
  • Blockchain applications in lifelong learning and the role of the semantic blockchain
  • Recent advances in Web3D semantic modeling
  • A Survey Paper on Effective Query Processing for Semantic Web Data using Hadoop Components
  • Analysis and Summarization of Related Blog Entries Using Semantic Web
  • Twitter Fake Account Detection and Classification using Ontological Engineering and Semantic Web Rule Language
  • Towards a question answering system over the semantic web
  • Linkingpark: An integrated approach for semantic table interpretation
  • Transitions in Journalism—Toward a Semantic-Oriented Technological Framework
  • Generation Blockchain: Youth in the age of the semantic web
  • The Semantic Web in the Internet of Production: A Strategic Approach with Use-Case Examples
  • Documentation, Processing, and Representation of Architectural Heritage through 3D Semantic Modelling: The INCEPTION Project
  • Publishing and Using Legislation and Case Law as Linked Open Data on the Semantic Web
  • Using Artificial Intelligence and Semantic Web Technologies Inside Cyberdefense Systems
  • Using semantic markup to boost context awareness for assistive systems
  • Understanding distributed data-a semantic web approach for data based analysis of NDT data in civil engineering
  • Semantic Modeling of Virtual Reality Training Scenarios
  • Semantic data models for hiking trail difficulty assessment
  • G-OWL: Towards Graphical Ontology Web Language, an OWL 2 Visual Notation for Semantic Web Ontology Modeling
  • A Semantic Web Solution for Enhancing the Interoperability of E-Learning Systems by Using Next Generation of SCORM Specifications
  • A novel framework and concept-based semantic search Interface for abnormal crowd behaviour analysis in surveillance videos
  • Ontologies as a semantic model in IoT
  • Neural language models for the multilingual, transcultural, and multimodal Semantic Web
  • Large-scale distributed semantic augmented reality services–A performance evaluation
  • Open Challenges for the Management and Preservation of Evolving Data on the Web
  • Semantic integration of Bosch manufacturing data using virtual knowledge graphs
  • Frame Logic‐based specification and discovery of semantic web services with application to medical appointments
  • Sasha: Semantic-aware shilling attacks on recommender systems exploiting knowledge graphs
  • A Semantic Approach for Entity Linking by Diverse Knowledge Integration incorporating Role-Based Chunking
  • Enhancing Online Knowledge Graph Population with Semantic Knowledge
  • LBD server: Visualising Building Graphs in web-based environments using semantic graphs and glTF-models
  • Towards a Semantic Layer Design for an Advanced Intelligent Multimodal Transportation System
  • Experience of the semantic technologies use for intelligent Web encyclopedia creation (on example of the Great Ukrainian Encyclopedia portal)
  • Towards a new generation of ontology based data access
  • Intellibvr-intelligent large-scale video retrieval for objects and events utilizing distributed deep-learning and semantic approaches
  • Semantic Web for the Tracer Study of Universitas Harapan Medan
  • Knowledge Graphs on the Web-An Overview.
  • The semantic web: two decades on
  • Ontology-based semantic annotation of Xenophon’s Hellenica.
  • User content categorisation model, a generic model that combines text mining and semantic models
  • Semantic-based early warning system for equipment maintenance
  • Special issue on Semantic eScience: Methods, tools and applications
  • A social-semantic recommender system for advertisements
  • OntoBestFit: A Best-Fit Occurrence Estimation strategy for RDF driven faceted semantic search
  • Leveraging semantic parsing for relation linking over knowledge bases
  • A Cloud Computing Capability Model for Large-Scale Semantic Annotation
  • Ambient Learning Spaces: Discover, Explore and Understand Semantic Correlations
  • On the role of knowledge graphs in explainable AI
  • Detecting different forms of semantic shift in word embeddings via paradigmatic and syntagmatic association changes
  • A visual modeling approach for the semantic web rule language
  • Semantic Web reasoning
  • Integrated Air Quality Monitoring with the Semantic Web of Things
  • Semantic Based Vitamin Deficiency Monitoring System
  • Research on the construction of the semantic model for Chinese ancient architectures based on architectural narratives
  • Rule-Based Semantic Validation for Standardized Linked Building Models
  • ORFFM: An Ontology-Based Semantic Model of River Flow and Flood Mitigation
  • Parallel-Based Techniques for Managing and Analyzing the Performance on Semantic Graph
  • Efficient Weighted Semantic Score Based on the Huffman Coding Algorithm and Knowledge Bases for Word Sequences Embedding
  • Semantic-based technologies for video analysis in activity recognition, video surveillance and smart home domains
  • An approach for semantic-based searching in learning resources
  • Semantics for Cyber-Physical Systems: A cross-domain perspective
  • Design of a Semantic Web-Based Lecturer Database
  • Web search personalization using semantic similarity measure
  • A Semantic data model to represent building material data in AEC collaborative workflows
  • Ontology engineering: Current state, challenges, and future directions
  • New Clustering-Based Semantic Service Selection and User Preferential Model
  • Applying semantic role labeling and spreading activation techniques for semantic information retrieval
  • Towards a semantic model for IoT-based seismic event detection and classification
  • Information credibility in the social web: Contexts, approaches, and open issues
  • Ontology Opportunities and Challenges: Discussions from Semantic Data Integration Perspectives
  • Schímatos: A SHACL-Based Web-Form Generator for Knowledge Graph Editing
  • Publishing CSV Data as Linked Data on the Web
  • A fully automated approach to a complete semantic table interpretation
  • WebNLG 2020 Challenge: Semantic Template Mining for Generating References from RDF
  • Linked Open Biodiversity Data (LOBD): A semantic application for integrating biodiversity information
  • Semantic harmonization of geoscientific data sets using Linked Data and project specific vocabularies
  • Virtualizing Document Algorithms Using Predictive Semantic Data
  • Design and Implementation of the Semantic Web for the Internship-Title Database of Informatics Engineering and Industrial Engineering at Universitas Ibnu …
  • Lemons: Leveraging Model-Based Techniques to Enable Non-Intrusive Semantic Enrichment in Wireless Sensor Networks
  • Leverage label and word embedding for semantic sparse web service discovery
  • Using a multimedia semantic graph for web document visualization and summarization
  • Semantic interoperability in the IoT: Extending the Web of things architecture
  • Sampo-UI: A full stack JavaScript framework for developing semantic portal user interfaces
  • The Semantic Web identity crisis: in search of the trivialities that never were
  • A Streamlined Pipeline to Enable the Semantic Exploration of a store
  • Semantic Web
  • Ontologies as nested facet systems for human–data interaction
  • A Methodology for Hierarchical Classification of Semantic Answer Types of Questions
  • JenTab: A Toolkit for Semantic Table Annotations
  • Explicitly semantic representation of pattern and combined geometrical specification
  • Assessing large-scale, cross-domain knowledge bases for semantic search
  • Enforcing social semantic in fipa-acl using spin
  • Evaluating Taxonomic Relationships Using Semantic Similarity Measures on Sensor Domain Ontologies
  • Improving Core Topics Discovery in Semantic Markup Literature: A Combined Approach
  • ewot: A semantic interoperability approach for heterogeneous iot ecosystems based on the web of things
  • User-centered design of a web-based crowdsourcing-integrated semantic text annotation tool for building a mental health knowledge base
  • Discovering Web Services By Matching Semantic Relationships Through Ontology
  • A Prototypical Semantic Annotator for A Tribuna Newspaper
  • Hybrid reasoning in knowledge graphs: Combing symbolic reasoning and statistical reasoning
  • Closing the Loop between knowledge patterns in cognition and the Semantic Web
  • Entity Linking and Lexico-Semantic Patterns for Ontology Learning
  • Semantic Enrichment of Association Rules Discovered in Operational Building Data for Reuse of Building Performance Patterns
  • Toward improved semantic annotation of food and nutrition data
  • EDR: A generic approach for the distribution of rule-based reasoning in a Cloud–Fog continuum
  • A Semantic Demand-Service Matching Method based on OWL-S for Cloud Testing Service Platform
  • A map without a legend
  • Data acquisition protocols and semantic modelling of the historical-architectural heritage: the INCEPTION project
  • Towards a semantic Construction Digital Twin: Directions for future research
  • Machine Translation Aided Bilingual Data-to-Text Generation and Semantic Parsing
  • Screening product tolerances considering semantic variation propagation and fusion for assembly precision analysis
  • Semantic-based service discovery in grid environment
  • Applying Ontology Knowledge Representation Technology and Semantic Searching Methods to Support the Production of High Quality Longan Fruit
  • Large-scale semantic exploration of scientific literature using topic-based hashing algorithms
  • Visual analysis of ontology matching results with the melt dashboard
  • A Semantic Approach for Extracting Medical Association Rules
  • Semantic Node-RED for rapid development of interoperable industrial IoT applications
  • A Semantic Layer Querying Tool
  • Hybrid approach for big data localization and semantic annotation
  • From obXML to the OP Ontology: Developing a Semantic Model for Occupancy Profile⋆
  • Towards an information semantic interoperability in smart manufacturing systems: contributions, limitations and applications
  • Modeling and reasoning of IoT architecture in semantic ontology dimension
  • Semtab 2019: Resources to benchmark tabular data to knowledge graph matching systems
  • Modeling a semantic recommender system for medical prescriptions and drug interaction detection
  • Semantic Stream Processing and Reasoning
  • Towards NLP-supported Semantic Data Management
  • Evaluation of IoT stream processing at edge computing layer for semantic data enrichment
  • Semantic approach to compliance checking of underground utilities
  • Similarity of Parts Determined by Semantic Networks as the Basis for Manufacturing Cost Estimation
  • A decision support system on the obesity management and consultation during childhood and adolescence using ontology and semantic rules
  • Introduction to cloud manufacturing
  • UcEF for Semantic IR: An Integrated Context-Based Web Analytics Method
  • A semantic-based methodology for digital forensics analysis
  • A semantic approach for timeseries data fusion
  • A more decentralized vision for linked data
  • Context-Aware Personalized Web Search Using Navigation History
  • Semantic modeling for engineering data analytics solutions
  • Processing sparql aggregate queries with web preemption
  • Architecting smart city digital twins: combined semantic model and machine learning approach
  • A semantic focused web crawler based on a knowledge representation schema
  • A benchmark for end-user structured data exploration and search user interfaces
  • TechNet: Technology semantic network based on patent data
  • An intelligent semantic system for real-time demand response management of a thermal grid
  • Semantic linking of research infrastructure metadata
  • SemSub: Semantic Subscriptions for the MQTT Protocol
  • Yago 4: A reason-able knowledge base
  • Football Ontology Construction using Oriented Programming
  • SAREF4INMA: a SAREF extension for the industry and manufacturing domain
  • Expedient information retrieval system for web pages using the natural language modeling
  • A novel approach on Particle Agent Swarm Optimization (PASO) in semantic mining for web page recommender system of multimedia data: a health care perspective
  • Semantic Annotations on Heritage Models: 2D/3D Approaches and Future Research Challenges
  • Semantic analysis on social networks: A survey
  • Semantic Intelligence for Knowledge-Based Compliance Checking of Underground Utilities
  • DeepWSC: Clustering Web Services via Integrating Service Composability into Deep Semantic Features
  • Leveraging knowledge graphs for big data integration: the xi pipeline
  • KGvec2go–Knowledge Graph Embeddings as a Service
  • Semantic description of documents in enterprise knowledge infrastructures
  • An OGC web service geospatial data semantic similarity model for improving geospatial service discovery
  • Empirist corpus 2.0: Adding manual normalization, lemmatization and semantic tagging to a German web and CMC corpus
  • Results of semtab 2020
  • IQA: Interactive query construction in semantic question answering systems
  • Named entity extraction for knowledge graphs: A literature overview
  • Reliable and interoperable computational molecular engineering: 2. Semantic interoperability based on the European Materials and Modelling Ontology
  • A compact brain storm algorithm for matching ontologies
  • An ontology-based representation of vaulted system for HBIM
  • RuBQ: A Russian dataset for question answering over Wikidata
  • A neural network for semantic labelling of structured information
  • Enhancing Public Procurement in the European Union through Constructing and Exploiting an Integrated Knowledge Graph
  • Semantic and syntactic interoperability for agricultural open-data platforms in the context of IoT using crop-specific trait ontologies
  • Semantic Representation of Physics Research Data
  • NEO: A Tool for Taxonomy Enrichment with New Emerging Occupations
  • An ontology framework for pile integrity evaluation based on analytical methodology
  • Ontology-guided Semantic Composition for Zero-Shot Learning
  • Semantic Expressibility of OWL Ontologies
  • QSST: A Quranic Semantic Search Tool based on word embedding
  • Geosparql+: Syntax, semantics and system for integrated querying of graph, raster and vector data
  • Knowledge-based expert system to support the semantic interoperability in smart manufacturing
  • Benchmarking neural embeddings for link prediction in knowledge graphs under semantic and structural changes
  • Chinese semantic document classification based on strategies of semantic similarity computation and correlation analysis
  • OpenCitations, an infrastructure organization for open scholarship
  • Measuring semantic distances using linked open data and its application on music recommender systems
  • Social network analysis for personalized characterization and risk assessment of alcohol use disorders in adolescents using semantic technologies
  • Linked Open Data Infrastructure for Digital Humanities in Finland
  • Semantic Analysis
  • Turning transport data to comply with eu standards while enabling a multimodal transport knowledge graph
  • Jentab: Matching tabular data to knowledge graphs
  • Combining chronicle mining and semantics for predictive maintenance in manufacturing processes
  • Research synthesis and thematic analysis of twitter through bibliometric analysis
  • Capturing information processes with variable domains
  • Building a Linked Open Data portal of war victims in Finland 1914-1922
  • Automatic annotation service appi: Named entity linking in legal domain
  • Answering Controlled Natural Language Questions over RDF Clinical Data
  • Are we better off with just one ontology on the Web?
  • Building ontology-driven tutoring models for intelligent tutoring systems using data mining
  • MPSUM: entity summarization with predicate-based matching
  • Semantic Data Structures for Knowledge Generation in Open World Information System
  • Mining cross-image semantics for weakly supervised semantic segmentation
  • The OpenCitations data model
  • A data science approach to drug safety: Semantic and visual mining of adverse drug events from clinical trials of pain treatments
  • Knowledge Graph Approach to Combustion Chemistry and Interoperability
  • Semantic-based discovery method for high-performance computing resources in cyber-physical systems
  • Knowledge Graphs for Explainable Artificial Intelligence: Foundations, Applications and Challenges
  • Video representation and suspicious event detection using semantic technologies
  • Personalised Semantic User Interfaces for Games
  • Semantic-based process mining technique for annotation and modelling of domain processes
  • Semantic Competence Modelling–Observations from a Hands-on Study with HyperCMP Knowledge Graphs and Implications for Modelling Strategies and Semantic …
  • Ultra fine-grained image semantic embedding
  • Visualization systems for linked datasets
  • Automatic detection of relation assertion errors and induction of relation constraints
  • A Novel Web Anomaly Detection Approach Based on Semantic Structure
  • Semantic interoperability in the internet of things-state-of-the-art and prospects
  • Discovery and Enrichment of Knowledges from a Semantic Wiki
  • Semantic similarity and text summarization based novelty detection
  • Ontologies in the Semantic Web
  • Difficulty-level modeling of ontology-based factual questions
  • Predicting semantic preferences in a socio-semantic system with collaborative filtering: A case study
  • Ontology-enhanced machine learning: a Bosch use case of welding quality monitoring
  • A survey of semantic relatedness evaluation datasets and procedures
  • Optimizing sensor ontology alignment through compact co-firefly algorithm
  • Compact homeomorphisms of semantic groups
  • Knowledge graph-based legal search over german court cases
  • Semantic concept schema of the linear mixed model of experimental observations
  • Smart Cabin: A Semantic-Based Framework for Indoor Comfort Customization inside a Cruise Cabin
  • Business Process Execution From the Alignment Between Business Processes and Web Services: A Semantic and Model-Driven Modernization Process
  • Test Case Generation of Composite Web Services Based on Semantic Matching and Condition Recognition
  • Ontology-based semantic modeling of knowledge in construction: classification and identification of hazards implied in images
  • Semantic Shopping: A Literature Study
  • Tough tables: Carefully evaluating entity linking for tabular data
  • Hereditary information processes with semantic modeling structures
  • Linked Credibility Reviews for Explainable Misinformation Detection
  • Building a morpho-semantic knowledge graph for Arabic information retrieval
  • On modeling the physical world as a collection of things: The w3c thing description ontology
  • Semantic micro-contributions with decentralized nanopublication services
  • Optimizing ontology alignment through linkage learning on entity correspondences
  • A metadata repository for semantic product lifecycle management
  • Toward owl restriction reconciliation in merging knowledge
  • Research on Service Discovery Methods Based on Knowledge Graph
  • S-COAP: Semantic Enrichment of COAP for Resource Discovery
  • Semantic knowledge networks in education
  • Qanswer KG: designing a portable question answering system over RDF data
  • Dividing the ontology alignment task with semantic embeddings and logic-based modules
  • An Ontology of Chinese Ceramic Vases
  • A semantic framework for extracting taxonomic relations from text corpus.
  • A Case for Semantic Annotation Of EHR
  • DREAM Principles from the PORTAL-DOORS Project and NPDS Cyberinfrastructure
  • AI-KG: an automatically generated knowledge graph of artificial intelligence
  • Semantic approach using unified and summarised ontologies for analysing data from social media
  • DeepSQLi: deep semantic learning for testing SQL injection
  • Knowledge graph matching with inter-service information transfer
  • An Ontology-based Information Model for Multi-Domain Semantic Modeling and Analysis of Smart City Data
  • A Common Semantic Model of the GDPR Register of Processing Activities
  • A partition based framework for large scale ontology matching
  • Mtab4wikidata at semtab 2020: Tabular data annotation with wikidata
  • KGTK: a toolkit for large knowledge graph manipulation and analysis
  • A semantic approach for document classification using deep neural networks and multimedia knowledge graph
  • REWARD: Ontology for reward schemes
  • Semantic enrichment of building and city information models: A ten-year review
  • Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning
  • Towards logical association rule mining on ontology-based semantic trajectories
  • Safe interoperability for web of things devices and systems
  • RDF graph validation using rule-based reasoning
  • Medical decision support systems and semantic technologies in healthcare
  • Facilitating the analysis of covid-19 literature through a knowledge graph
  • CulturalERICA: A conversational agent improving the exploration of European cultural heritage
  • A challenge for historical research: making data FAIR using a collaborative ontology management environment (OntoME)
  • P2L: Predicting transfer learning for images and semantic relations
  • crowd: A Visual Tool for Involving Stakeholders into Ontology Engineering Tasks
  • Semantic Interpretation of Top-N Recommendations
  • Modular graphical ontology engineering evaluated
  • Semantic approach to RIoT autonomous robots mission coordination
  • Dynamic faceted search for technical support exploiting induced knowledge
  • Deep learning framework for RDF and knowledge graphs using fuzzy maps to support medical decision
  • Semantic Interoperability for IoT Agriculture Framework with Heterogeneous Devices
  • An Ontology-Based Framework for Publishing and Exploiting Linked Open Data: A Use Case on Water Resources Management
  • Hidden data states-based complex terminology extraction from textual web data model
  • Elas4RDF: Multi-perspective triple-centered keyword search over RDF using elasticsearch
  • Linked Open Data Service about Historical Finnish Academic People in 1640-1899.
  • A comparative study of meta-heuristic optimisation techniques for prioritisation of risks in agile software development
  • Artificial intelligence (AI) and Britons health: how can AI help to health in resource-based situations?
  • Commonsense knowledge base completion with structural and semantic context
  • Evaluating and comparing ontology alignment systems: An MCDM approach
  • Knowledge-infused Deep Learning
  • Impact of Deep Learning on Semantic Sentiment Analysis
  • How good is this merged ontology?
  • A Semantic Mixed Reality Framework for Shared Cultural Experiences Ecosystems
  • Semantic search using Natural Language Processing
  • ChImp: Visualizing ontology changes and their impact in protégé
  • Fundamentals of Web 1.0, Web 2.0, Web 3.0, and Web 4.0
  • An approach for measuring semantic similarity between Wikipedia concepts using multiple inheritances
  • Deep hierarchical encoding model for sentence semantic matching
  • Structured Semantic Modeling of Scientific Citation Intents
  • Cone-KG: A Semantic Knowledge Graph with News Content and Social Context for Studying Covid-19 News Articles on Social Media
  • Integration of Semantics Into Sensor Data for the IoT: A Systematic Literature Review
  • Ontology matching using convolutional neural networks
  • A semi-structured information semantic annotation method for Web pages
  • Crowd-sourcing and Automatic Generation of Semantic Information in Blended-Learning Environments
  • NG-Tax 2.0: A semantic framework for high-throughput amplicon analysis
  • OBA: An Ontology-Based Framework for Creating REST APIs for Knowledge Graphs
  • Semantic Feature Analysis Model: Linguistics Approach in Foreign Language Learning Material Development
  • Apache jena: A free and open source java framework for building semantic web and linked data applications
  • Clustering Mashups by Integrating Structural and Semantic Similarities Using Fuzzy AHP
  • A Medieval Epigraphic Corpus and its Retro-Developments (CIFM-CBMA): The Exploratory Research of the Cosme2 Consortium
  • Beyond Lexical: A Semantic Retrieval Framework for Textual SearchEngine
  • Construction and Leverage Scientific Knowledge Graphs by Means of Semantic Technologies
  • Using knowledge anchors to facilitate user exploration of data graphs
  • 6G networks: Beyond Shannon towards semantic and goal-oriented communications
  • A Review of Geospatial Semantic Information Modeling and Elicitation Approaches
  • Semantic approaches for query expansion
  • An ontological approach for pathology assessment and diagnosis of tunnels
  • Efficient fuzzy based K-nearest neighbour technique for web services classification
  • Idea generation with technology semantic network
  • Semantic Contextual Reasoning to Provide Human Behavior
  • AgriEnt: A knowledge-based web platform for managing insect pests of field crops
  • Ontologies-based domain knowledge modeling and heterogeneous sensor data integration for bridge health monitoring systems
  • Common Agriculture Vocabulary for Enhancing Semantic-level Interoperability in Japan
  • The semantic data dictionary: an approach for describing and annotating data
  • The knowledge graph track at OAEI
  • Semantic Enrichment Tool for Implementing Learning Mechanism for Trend Analysis
  • Semantic Framework for Creating an Instance of the IoE in Urban Transport: A Study of Traffic Management with Driverless Vehicles
  • Sensored Semantic Annotation for Traffic Control based on Knowledge Inference in Video
  • Achieving System‐of‐Systems Interoperability Levels Using Linked Data and Ontologies
  • Reliability and Safety of Autonomous Systems Based on Semantic Modelling for Self-Certification
  • Let’s build Bridges, not Walls: SPARQL Querying of TinkerPop Graph Databases with Sparql-Gremlin
  • The 2020 bilingual, bi-directional WebNLG+ shared task overview and evaluation results (WebNLG+ 2020)
  • Automating Mashup Service Recommendation via Semantic and Structural Features
  • Towards a semantic integration of data from learning platforms
  • An Ontology Based Framework for Automatic Web Resources Identification
  • StreamPipes Connect: Semantics-Based Edge Adapters for the IIoT
  • Risk response for municipal solid waste crisis using ontology-based reasoning
  • A new system for massive RDF data management using Big Data query languages Pig, Hive, and Spark
  • Survey on complex ontology matching
  • A web repository for geo-located 3D digital cultural heritage models
  • Lightweight Data-Security Ontology for IoT
  • Automated query classification based web service similarity technique using machine learning
  • XMatcher: Matching extensible markup language schemas using semantic-based techniques
  • Context sensitive access control in smart home environments
  • A hybrid semantic query expansion approach for Arabic information retrieval
  • Geo-semantic-parsing: AI-powered geoparsing by traversing semantic knowledge graphs
  • Digital Cultural Heritage and Linked Data: Semantically-informed conceptualisations and practices with a focus on intangible cultural heritage
  • InVeRo: Making Semantic Role Labeling accessible with intelligible verbs and roles
  • Automatic Extraction of Engineering Rules From Unstructured Text: A Natural Language Processing Approach
  • Hybrid Approach for Sentiment Analysis of Twitter Posts Using a Dictionary-based Approach and Fuzzy Logic Methods: Study Case on Cloud Service Providers
  • MetaLink: A Travel Guide to the LOD Cloud
  • The enslaved ontology: Peoples of the historic slave trade
  • Transferring the semantic constraints in human manipulation behaviors to robots
  • XMLSchema2ShEx: Converting XML validation to RDF validation
  • AWARE: A Situational Awareness Framework for Facilitating Adaptive Behavior of Autonomous Vehicles in Manufacturing
  • A formal, scalable approach to semantic interoperability
  • Enabling Digital Business Transformation Through an Enterprise Knowledge Graph
  • Developing an Arabic Infectious Disease Ontology to Include Non-Standard Terminology
  • Giving meaning to unsupervised EO change detection rasters: a semantic-driven approach
  • Template-based question answering using recursive neural networks
  • Ontology-Based Analysis Semantic Correlation Interventions in the Field of Health
  • Mapping Crisp Structural Semantic Similarity Measures to Fuzzy Context: A Generic Approach
  • Semantic Descriptor for Intelligence Services
  • GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing
  • Introduction: What Is a Knowledge Graph?
  • NABU: Multilingual Graph-Based Neural RDF Verbalizer
  • Embedding Oriented Adaptable Semantic Annotation Framework for Amharic Web Documents
  • Entity extraction from Wikipedia list pages
  • Integrating Machine Learning Techniques in Semantic Fake News Detection
  • RDF-BF-hypergraph representation for relational database
  • Web of data
  • An Ontology for the Materials Design Domain
  • Workflow Discovery with Semantic Constraints: The SAT-Based Implementation of APE
  • Semantic Similarity of XML Documents Based on Structural and Content Analysis
  • From Paper to Digital Trail
  • Proactive and reactive context reasoning architecture for smart web services
  • A Semantic Approach of Building Dynamic Learner Profile Model Using WordNet
  • VQuAnDa: Verbalization question answering dataset
  • SemBioNLQA: a semantic biomedical question answering system for retrieving exact and ideal answers to natural language questions
  • Implementation of aspect-oriented business process models with web services
  • Analysis on Semantic level Information Retrieval and Query Processing
  • Ontology based Concept Extraction and Classification of Ayurvedic Documents
  • Building linked spatio-temporal data from vectorized historical maps
  • The virtual knowledge graph system ontop
  • Climatechange vs. Globalwarming: Characterizing Two Competing Climate Discourses on Twitter with Semantic Network and Temporal Analyses
  • Astrea: automatic generation of SHACL shapes from ontologies
  • Digital Skills Workshop: Modelling, Capturing, Cataloguing, Processing and Certification
  • Detecting malicious JavaScript code based on semantic analysis
  • Detecting fake news for the new coronavirus by reasoning on the Covid-19 ontology
  • Automating GDPR Compliance using Policy Integrated Blockchain
  • A literature review of current technologies on health data integration for patient-centered health management
  • MTab4DBpedia: Semantic Annotation for Tabular Data with DBpedia
  • A Framework of Utilizing Big Data of Social Media to Find Out the Habits of Users Using Keyword
  • Robot Scheduling System Based on Semantic Recognition
  • Intelligent role-based access control model and framework using semantic business roles in multi-domain environments
  • Designing a framework for communal software: based on the assessment using relation modelling
  • Semantic Software Capability Profile Based on Enterprise Architecture for Software Reuse
  • Data integration for offshore decommissioning waste management
  • Adding semantics to enrich public transport and accessibility data from the Web
  • Clinical features and the traditional Chinese medicine therapeutic characteristics of 293 COVID-19 inpatient cases
  • Compressed indexes for fast search of semantic data
  • Towards an e-Government semantic interoperability assessment framework
  • Toward Semantic IoT Load Inference Attention Management for Facilitating Healthcare and Public Health Collaboration: A Survey
  • Optimizing biomedical ontology alignment through a compact multiobjective particle swarm optimization algorithm driven by knee solution
  • National Budget as Linked Open Data: New Tools for Supporting the Sustainability of Public Finances
  • Tag’s Depth-Based Expert Profiling Using a Topic Modeling Technique
  • Design Trend Forecasting by Combining Conceptual Analysis and Semantic Projections: New Tools for Open Innovation
  • Collective Entity Disambiguation Based on Hierarchical Semantic Similarity
  • A brief survey on semantic segmentation with deep learning
  • A general benchmarking framework for text generation
  • An approach for radicalization detection based on emotion signals and semantic similarity
  • Graph Generators: State of the art and open challenges
  • Converting Asturian Notaries Public deeds to Linked Data using TEI and ShExML
  • A semantic search tool for E-government public services in Albania
  • Keyword search over RDF using document-centric information retrieval systems
  • Improving Entity Linking through Semantic Reinforced Entity Embeddings
  • Enrich cross-lingual entity links for online wikis via multi-modal semantic matching
  • Contextual Preferences to Personalise Semantic Data Lake Exploration
  • Context-Aware Web Service Clustering and Visualization
  • A Semantic Matchmaking Technique for Cloud Service Discovery and Selection Using Ontology Based on Service-Oriented Architecture
  • Building and querying semantic layers for web archives (extended version)
  • Semantic representation of engineering knowledge, pre-study
  • Tag Me If You Can! Semantic Annotation of Biodiversity Metadata with the QEMP Corpus and the BiodivTagger
  • Multilingual corpus creation for multilingual semantic similarity task
  • Occupant Feedback and Context Awareness: On the Application of Building Information Modeling and Semantic Technologies for Improved Complaint Management in …
  • SemKoRe: Improving Machine Maintenance in Industrial IoT with Semantic Knowledge Graphs
  • AI-Based Semantic Multimedia Indexing and Retrieval for Social Media on Smartphones
  • Methods of Processing Large Collections of Scientific Documents and the Formation of Digital Mathematical Library
  • Extracting a justification for OWL ontologies by critical axioms
  • A Semantic-Enabled Smart Home for AAL and Continuity of Care
  • Need for Computational and Psycho-linguistics Models in Natural Language Processing for Web Documents
  • An ontology-based framework for automated code generation of Web AR applications
  • ESBM: an entity summarization benchmark
  • LinkZoo: A Collaborative Resource Management Tool Based on Linked Data
  • EventKG+ BT: Generation of Interactive Biography Timelines from a Knowledge Graph
  • A Model-Driven Approach for Semantic Data-as-a-Service Generation
  • Technology-enhanced learning in higher education: A bibliometric analysis with latent semantic approach
  • Towards a Linked Open Code
  • The rise of the Pragmatic Web: Implications for rethinking meaning and interaction
  • Building a Semantic Repository for Outpatient Sheets
  • Exploiting a Multilingual Semantic Machine Translation Architecture for Knowledge Representation of Patient Data for Covid-19
  • Ontology-Based Decision Support System for the Nitrogen Fertilization of Winter Wheat
  • Optimization of Information Retrieval Algorithm for Digital Library Based on Semantic Search Engine
  • Semantic Relatedness for Keyword Disambiguation: Exploiting Different Embeddings
  • Evolving Meaning for Supervised Learning in Complex Biomedical Domains Using Knowledge Graphs
  • Analysing lexical semantic change with contextualised word representations
  • A Semantic Question Answering through Heterogeneous Data Source in the Domain of Smart Factory
  • A catalogue of energy conservation measures (ECM) and a tool for their application in energy simulation models
  • From grammar inference to semantic inference—An evolutionary approach
  • Capture and visualisation of text understanding through semantic annotations and semantic networks for teaching and learning
  • Autonomous navigation framework for intelligent robots based on a semantic environment modeling
  • Semantic Web and Business Intelligence in Big-Data and Cloud Computing Era
  • Single-stage semantic segmentation from image labels
  • A similarity measure in formal concept analysis containing general semantic information and domain information
  • Splitting vs. merging: Mining object regions with discrepancy and intersection loss for weakly supervised semantic segmentation
  • Crime event localization and deduplication
  • Linking Dutch civil certificates
  • A Parallel World Framework for scenario analysis in knowledge graphs
  • Semantic Interoperability to Enable Smart, Grid-Interactive Efficient Buildings
  • Semantic Mining Approach Based On Learning of An Enhanced Semantic Model For Textual Business Intelligence
  • A novel machine natural language mediation for semantic document exchange in smart city
  • Egyptian Shabtis Identification by Means of Deep Neural Networks and Semantic Integration with Europeana
  • Autoencoding word representations through time for semantic change detection
  • A new semantic annotation approach for software vulnerability source code
  • The enslaved dataset: A real-world complex ontology alignment benchmark using wikibase
  • A Hybrid Semantic Knowledge Integration and Sharing Approach for Distributed Smart Environments
  • An approach for generation of SPARQL query from SQL algebra based transformation rules of RDB to ontology
  • Semi-automatic RDFization Using Automatically Generated Mappings
  • DAMN: defeasible reasoning tool for multi-agent reasoning
  • Ontologies for observations and actuations in buildings: A survey
  • HMatcher: Matching schemas holistically
  • Ontological Design of Information Retrieval Model for Real Estate Documents
  • Semantic framework for data flow control in the network of information graphs
  • Top-Rank-Focused Adaptive Vote Collection for the Evaluation of Domain-Specific Semantic Models
  • Embedding-based recommendations on scholarly knowledge graphs
  • CoMerger: a customizable online tool for building a consistent quality-assured merged ontology
  • On the Combined Use of Extrinsic Semantic Resources for Medical Information Search
  • Novel entity discovery from web tables
  • Pattern sampling in distributed databases
  • The impact of semantic annotation techniques on content-based video lecture recommendation
  • A Knowledge-based Model for Semantic Oriented Contextual Advertising
  • Semantic information for robot navigation: A survey
  • Interactive E-Text Platform Based on Block Editing Model in Crowdsourcing E-Learning Environments
  • Gravsearch: transforming SPARQL to query humanities data
  • Contextual Propagation of Properties for Knowledge Graphs
  • NoHR: an overview
  • Linked research on the decentralised Web
  • Cross-modal image sentiment analysis via deep correlation of textual semantic
  • Sequential Modelling of the Evolution of Word Representations for Semantic Change Detection
  • Adding value to Linked Open Data using a multidimensional model approach based on the RDF Data Cube vocabulary
  • Web 3.0 and the Semantic Web: How the Future Internet Will Change Everything
  • BioHackathon 2015: Semantics of data for life sciences and reproducible research
  • Efficient Representation of Very Large Linked Datasets as Graphs.
  • Question-Answer patterns in GIS: Semantic analysis of geo-analytical questions in Human Geography
  • CustNER: A Rule-Based Named-Entity Recognizer With Improved Recall
  • Equivalent rewritings on path views with binding patterns
  • OWL2Bench: A benchmark for OWL 2 reasoners
  • Dynamic Knowledge Graphs as Semantic Memory Model for Industrial Robots
  • Semantic Web: Enabler of Industry 4.0
  • A Big Data Solution To Process Semantic Web Data Using The Model Driven Engineering Approach
  • Building Semantic Web Applications
  • Towards a holistic semantic support for context-aware network monitoring
  • Conciliating perspectives from mapping agencies and web of data on successful European SDIs: Toward a European geographic knowledge graph
  • Construction and Usage of a Human Body Common Coordinate Framework Comprising Clinical, Semantic, and Spatial Ontologies
  • MELODI Presto: A fast and agile tool to explore semantic triples derived from biomedical literature
  • An SKOS-Based Vocabulary on the Swift Programming Language
  • CORD19STS: COVID-19 semantic textual similarity dataset
  • Proposal of the first international workshop on semantic indexing and information retrieval for health from heterogeneous content types and languages (SIIRH)
  • A Design of Similar Video Recommendation System using Extracted Words in Big Data Cluster
  • Intelligence graphs for threat intelligence and security policy validation of cyber systems
  • A blueprint for the architecture of fully infrastructural and foundational Digital Identification systems based on the blockchained Semantic Approach
  • Delivering public services through social media in European local governments. An interpretative framework using semantic algorithms
  • Salient context-based semantic matching for information retrieval
  • Enhancement Semantic Prediction Big Data Method for COVID-19: Onto-NoSQL
  • Excut: Explainable embedding-based clustering over knowledge graphs
  • Comprehensive analysis of rule formalisms to represent clinical guidelines: Selection criteria and case study on antibiotic clinical guidelines
  • Semantic Mapping of Component Framework Interface Ontologies for Interoperability of Vehicle Applications
  • Semantic and Qualitative Physics-Based Reasoning on Plain-English Flow Terms for Generating Function Model Alternatives
  • A Semantic-based Multi-agent Dynamic Interaction Model
  • Fast and exact rule mining with AMIE 3
  • Handling impossible derivations during stream reasoning
  • NUBOT: Embedded Knowledge Graph With RASA Framework for Generating Semantic Intents Responses in Roman Urdu
  • Pini Language and PiniTree Ontology Editor: Annotation and Verbalisation for Atomised Journalism
  • Ontology-Based Semantic Retrieval for Durian Pests and Diseases Control System
  • Application Ontology for Multi-Agent and Web-Services’ Co-Simulation in Power and Energy Systems
  • XChange: A semantic diff approach for XML documents
  • FunMap: Efficient Execution of Functional Mappings for Knowledge Graph Creation
  • Toward a Knowledge-based Personalised Recommender System for Mobile App Development
  • N-Sanitization: A semantic privacy-preserving framework for unstructured medical datasets
  • Validation of a Semantic Search Engine for Academic Resources on Engineering Teamwork
  • Understanding the spatial dimension of natural language by measuring the spatial semantic similarity of words through a scalable geospatial context window
  • Fused GRU with semantic-temporal attention for video captioning
  • What are links in Linked Open Data? A characterization and evaluation of links between knowledge graphs on the Web
  • Linking ontological classes and archaeological forms
  • A proposed method using the semantic similarity of WordNet 3.1 to handle the ambiguity to apply in social media text
  • Towards optimize-ESA for text semantic similarity: A case study of biomedical text
  • A network intrusion detection method based on semantic re-encoding and deep learning
  • ExtruOnt: An ontology for describing a type of manufacturing machine for Industry 4.0 systems
  • Research trends in text mining: Semantic network and main path analysis of selected journals
  • Semantic Enrichment of Linked Personal Authority Data: a case study of elites in late Imperial China
  • Hyperbolic knowledge graph embeddings for knowledge base completion
  • An intelligent personalized web blog searching technique using fuzzy-based feedback recurrent neural network
  • Vietnamese tourism linked open data
  • Semantic Interoperability for DR Schemes Employing the SGAM Framework
  • An efficient radix trie‐based semantic visual indexing model for large‐scale image retrieval in cloud environment
  • Semantic-based Architecture Smell Analysis
  • A Flexible Semantic Inference Methodology to Reason on the User Preferences in Recommender Systems
  • A categorization of simultaneous localization and mapping knowledge for mobile robots
  • Decision support for network path estimation via automated reasoning
  • Modular ontology modeling: A tutorial
  • Comparison of development methodologies in web applications
  • SwarmGen: a framework for automatic generation of semantic services in an IoT network
  • Learning semantic information from Internet Domain Names using word embeddings
  • Extending SPARQL with Similarity Joins
  • Categorically Provisional
  • Secure Timestamp-Based Mutual Authentication Protocol for IoT Devices Using RFID Tags
  • y2: A Plan-and-Pretrain Approach for Knowledge Graph-to-Text Generation
  • Enhanced semantic representation of coaxiality with double material requirements
  • Don’t parse, generate! a sequence to sequence architecture for task-oriented semantic parsing
  • WoT Store: Managing resources and applications on the web of things
  • A Decentralized Semantic Reasoning Approach for the Detection and Representation of Continuous Spatial Dynamic Phenomena in Wireless Sensor Networks
  • Knowledge-Based Semantic Relatedness measure using Semantic features
  • Enhanced twitter sentiment analysis using hybrid approach and by accounting local contextual semantic
  • Rule based Semantic Reasoning for Personalized Recommendation in Indoor O2O e-commerce
  • DBpedia Archivo: A Web-Scale Interface for Ontology Archiving Under Consumer-Oriented Aspects
  • Completeness and soundness guarantees for conjunctive SPARQL queries over RDF data sources with completeness statements
  • Semantic-driven modelling of context and entity of interest profiles for maritime situation awareness
  • Semantic Management of Urban Traffic Congestion
  • A model of semantic web service in a distributed computer system
  • Generating Compact and Relaxable Answers to Keyword Queries over Knowledge Graphs
  • Semantic Information in Sensor Networks: How to Combine Existing Ontologies, Vocabularies and Data Schemes to Fit a Metrology Use Case
  • Gold-level open access at the Semantic Web journal
  • The construction of sentiment lexicon based on context-dependent part-of-speech chunks for semantic disambiguation
  • Incremental Multi-source Entity Resolution for Knowledge Graph Completion
  • Empowering Museum Experiences Applying Gamification Techniques Based on Linked Data and Smart Objects
  • Semantic traffic sensor data: The TRAFAIR experience
  • Semantic Data Analytics Engine with Domain-specific Implementation: a Case Study in Diabetes
  • Data mining model for food safety incidents based on structural analysis and semantic similarity
  • Semantic Similarity Between Adjectives and Adverbs—The Introduction of a New Measure
  • A Self-Balanced Clustering Tree for Semantic-Based Image Retrieval
  • BCRL: Long Text Friendly Knowledge Graph Representation Learning
  • Hinting Semantic Parsing with Statistical Word Sense Disambiguation
  • Studying the association of online brand importance with museum visitors: An application of the semantic brand score
  • TextEssence: A Tool for Interactive Analysis of Semantic Shifts Between Corpora
  • Topic-aware web service representation learning
  • Development of Semantically Rich 3D Retrofit Models
  • Query Optimization for Large Scale Clustered RDF Data
  • A Spatiotemporal Knowledge Bank from Rape News Articles for Decision Support
  • How emotion is learned: Semantic learning of novel words in emotional contexts
  • Leveraging Linguistic Linked Data for Cross-Lingual Model Transfer in the Pharmaceutical Domain
  • POSMASWEB: Paranoid Operating System Methodology for Anonymous and Secure Web Browsing
  • Semantic Triple Encoder for Fast Open-Set Link Prediction
  • AIDA: a Knowledge Graph about Research Dynamics in Academia and Industry
  • Linked Vocabularies for Mobility and Transport Research
  • Approach to Reasoning about Uncertain Temporal Data in OWL 2
  • Semantic contact and semantic barriers: reactionary responses to disruptive ideas
  • A Semantic Question Answering in a Restricted Smart Factory Domain Attaching to Various Data Sources
  • Towards an effective user interface for data exploration, data quality assessment and data integration
  • Towards a Knowledge Graph Lifecycle: A pipeline for the population of a commercial Knowledge Graph
  • Semantic relatedness algorithm for keyword sets of geographic metadata
  • Hybrid method for text summarization based on statistical and semantic treatment
  • A research review and taxonomy development for decision support and business analytics using semantic text mining
  • Bridge damage: Detection, IFC-based semantic enrichment and visualization
  • The UCD-Net System at SemEval-2020 Task 1: Temporal Referencing with Semantic Network Distances
  • Improving privacy in health care with an ontology‐based provenance management system
  • A practical primer on processing semantic property norm data
  • Bridging the Semantic Gap Between Customer Needs and Design Specifications Using User-generated Content
  • From Data Flows to Privacy Issues: A User-Centric Semantic Model for Representing and Discovering Privacy Issues
  • Semantic technology
  • Learning expressive linkage rules from sparse data
  • Modeling execution techniques of inscriptions
  • HDTcrypt: Compression and encryption of RDF datasets
  • A Novel Path-based Entity Relatedness Measure for Efficient Collective Entity Linking
  • A better way of extracting dominant colors using salient objects with semantic segmentation
  • Classification constrained discriminator for domain adaptive semantic segmentation
  • Open Geodata Reuse: Towards Natural Language Interfaces to Web APIs
  • Capturing push-processing using enriched semantic mesh equipped with functionals-and-hops model
  • Heterogeneous Network Embedding for Deep Semantic Relevance Match in E-commerce Search
  • A Fuzzy, Incremental and Semantic Trending Topic Detection in Social Feeds
  • Semantic-aware security orchestration in SDN/NFV-enabled IoT systems
  • Generating Referring Expressions from RDF Knowledge Graphs for Data Linking
  • A Novel Conceptual Weighting Model for Semantic Information Retrieval
  • An Iterative Approach for Crowdsourced Semantic Labels Aggregation
  • Semantic-based padding in convolutional neural networks for improving the performance in natural language processing. A case of study in sentiment analysis
  • Lexical semantic recognition
  • Ten Ways of Leveraging Ontologies for Rapid Natural Language Processing Customization for Multiple Use Cases in Disjoint Domains
  • Semantic classification of monuments’ decoration materials using convolutional neural networks: a case study for meteora byzantine churches
  • Evolution of Semantic Similarity—A Survey
  • Web2Touch 2020–21: Semantic Technologies for Smart Information Sharing and Web Collaboration
  • Semantic memory impairment and neuroregulation in patients with mild cognitive impairment
  • Semantic model to extract tips from hotel reviews
  • Forum Duplicate Question Detection by Domain Adaptive Semantic Matching
  • Semantic Unsupervised Automatic Keyphrases Extraction by Integrating Word Embedding with Clustering Methods
  • Semantic Simulations Based on Object-Oriented Analysis and Modeling
  • Software for creating and analyzing semantic representations
  • Test Oracle using Semantic Analysis from Natural Language Requirements
  • Semantic Document Clustering Based Indexing for Tamil Language Information Retrieval System
  • Localizing Q&A Semantic Parsers for Any Language in a Day
  • SMART-KG: hybrid shipping for SPARQL querying on the web
  • Semantic Knowledge Management for Herbal Medicines Used in Primary Health Care
  • SCoRe: Pre-Training for Context Representation in Conversational Semantic Parsing
  • A Simple Method for Inducing Class Taxonomies in Knowledge Graphs
  • Semantic Modeling and Control of Urban Water Supply Networks
  • A URI parsing technique and algorithm for anti-pattern detection in RESTful Web services
  • A New Hybrid Improved Method for Measuring Concept Semantic Similarity in WordNet
  • On Distributed SPARQL Query Processing Using Triangles of RDF Triples
  • Making neural networks FAIR
  • Semantic Relations and Deep Learning
  • Cost-and Robustness-Based Query Optimization for Linked Data Fragments
  • Two Ways for the Automatic Generation of Application Ontologies by Using BalkaNet
  • What did we learn from forty years of research on semantic interference? A Bayesian meta-analysis
  • Learning Short-Term Differences and Long-Term Dependencies for Entity Alignment
  • Semantic Analysis to Identify Students’ Feedback
  • Applying process mining and semantic reasoning for process model customisation in healthcare
  • Axiomatic Relation Extraction from Text in the Domain of Tourism
  • Integrating and managing BIM in 3D web-based GIS for hydraulic and hydropower engineering projects
  • Exploration and discovery of the COVID-19 literature through semantic visualization
  • Knowledge Graph OLAP
  • Text-to-text pre-training model with plan selection for rdf-to-text generation
  • The Effect of Gender, Age, and Education on the Adoption of Mobile Government Services
  • WordRecommender: An Explainable Content-Based Algorithm based on Sentiment Analysis and Semantic Similarity
  • The neural correlates of semantic control revisited
  • Index Point Detection and Semantic Indexing of Videos—A Comparative Review
  • Contextual Identification of Windows Malware through Semantic Interpretation of API Call Sequence
  • Cyber-Physical-Social Semantic Link Network
  • Hierarchical gated recurrent unit with semantic attention for event prediction
  • ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces
  • What Researchers are Currently Saying about Ontologies: A Review of Recent Web of Science Articles
  • Rule-Guided Graph Neural Networks for Recommender Systems
  • A technique for semantic annotation and retrieval of e-learning objects
  • WebNLG Challenge 2020: Language Agnostic Delexicalisation for Multilingual RDF-to-text generation
  • Weakly supervised semantic segmentation with boundary exploration
  • Semantic Repository
  • Keyword extraction for search engine optimization using latent semantic analysis
  • Children Semantic Network Growth: A Graph Theory Analysis
  • Semantic Ambiguity of English-Language Chatbots
  • Complementing lexical retrieval with semantic residual embedding
  • Multi-level diversification approach of semantic-based image retrieval results
  • Semantic Sentiment Analysis Based on Probabilistic Graphical Models and Recurrent Neural Network
  • Semantic annotation of summarized sensor data stream for effective query processing
  • Towards Fully-fledged Archiving for RDF Datasets
  • Enhanced query processing over semantic cache for cloud based relational databases
  • Semantic relation extraction using sequential and tree-structured LSTM with attention
  • Document classification using convolutional neural networks with small window sizes and latent semantic analysis
  • A Pilot Study of Text-to-SQL Semantic Parsing for Vietnamese
  • Making Every Label Count: Handling Semantic Imprecision by Integrating Domain Knowledge
  • The Development of Web-Based Learning Models as A Learning Medium for Students of Audio Video Electronics Competencies
  • Specification of side-effect management techniques for semantic graph sanitization
  • ELMo and BERT in semantic change detection for Russian
  • Reliability does matter: An end-to-end weakly supervised semantic segmentation approach
  • A SoLiD App to Participate in a Scalable Semantic Supply Chain Network on the Blockchain
  • How to Play Tag: A Formalization of Semantic Interoperability to Catch Semantics in Building Automation
  • Distant Supervised Relation Extraction via DiSAN-2CNN on a Feature Level
  • SHREC 2020: 3D point cloud semantic segmentation for street scenes
  • AutoQA: From databases to QA semantic parsers with only synthetic training data
  • Semantic Layer Construction for Big Data Integration
  • Identifying Localized Entrepreneurial Projects Through Semantic Social Network Analysis
  • Efficient Semantic Enrichment Process for Spatiotemporal Trajectories in Geospatial Environment
  • Block Annotation: Better Image Annotation for Semantic Segmentation with Sub-Image Decomposition
  • Distinguishing between paradigmatic semantic relations across word classes: human ratings and distributional similarity
  • Conversational semantic parsing over tables by decoupling and grouping actions
  • Challenges for computational lexical semantic change
  • Enhanced text matching based on semantic transformation
  • A comparison of object-triple mapping libraries
  • Where new words are born: Distributional semantic analysis of neologisms and their semantic neighborhoods
  • A benchmark for large-scale heritage point cloud semantic segmentation
  • ChoseAmobile: A Web-based Recommendation System for Mobile Phone Products
  • bbw: Matching CSV to Wikidata via Meta-lookup
  • Selection of Countermeasures against Harmful Information based on the Assessment of Semantic Content of Information Objects in the Conditions of Uncertainty
  • An Emotion-Aware Learning Analytics System Based on Semantic Task Automation
  • The Impact of Supercategory Inclusion on Semantic Classifier Performance
  • Building multi-subtopic Bi-level network for micro-blog hot topic based on feature Co-Occurrence and semantic community division
  • Creating semantic representations
  • SDC-depth: Semantic divide-and-conquer network for monocular depth estimation
  • Textual case-based adaptation using semantic relatedness-a case study in the domain of security documents
  • Content-based Image Retrieval and the Semantic Gap in the Deep Learning Era
  • A comprehensive review of type-2 fuzzy ontology
  • Semantic Data Pre-Processing for Machine Learning Based Bankruptcy Prediction Computational Model
  • Compass-aligned Distributional Embeddings for Studying Semantic Differences across Corpora
  • Identifying expertise through semantic modeling: A modified BBPSO algorithm for the reviewer assignment problem
  • Towards Semantic Noise Cleansing of Categorical Data based on Semantic Infusion
  • The penetration of Internet of Things in robotics: Towards a web of robotic things
  • Hybrid deep-semantic matrix factorization for tag-aware personalized recommendation
  • Developing Web Applications with Awareness of Data Quality Elements–DQAWA
  • An Investigation of Semantic Interoperability with EHR systems for Precision Dosing
  • Web of scholars: A scholar knowledge graph
  • UiO-UvA at SemEval-2020 task 1: Contextualised embeddings for lexical semantic change detection
  • A Comparison of Approaches for Measuring the Semantic Similarity of Short Texts Based on Word Embeddings
  • Semantic Web topics for presentation



Semantic Web: Recently Published Documents


A NOVEL APPROACH FOR SEMANTIC WEB APPLICATION IN ONLINE EDUCATION BASED ON STEGANOGRAPHY

Semantic Web technology is not as new as many of us assume; it has evolved over the years. "Linked Data" is the term recently applied to the Semantic Web. The Semantic Web is a continuation of Web 2.0 and is intended to replace existing technologies. It builds on natural language processing and offers solutions to many prevailing issues. Web 3.0 is the version of the Semantic Web that caters to the information needs of half of the population on earth. This paper links two pressing current concerns, information security and the online education enforced by COVID-19, with the Semantic Web. The steganography requirement for the Semantic Web is discussed in detail, since encryption alone is inadequate to provide protection. Web 2.0 issues concerning online education, and Semantic Web solutions to them, are also discussed. An extensive literature survey covers the architecture of Web 3.0, the history of online education, and security architecture. Finally, the Semantic Web is here to stay, and data hiding combined with encryption makes it robust.

Rule-based information extraction for mechanical-electrical-plumbing-specific semantic web

Cultural heritage storytelling, engagement and management in the era of big data and the semantic web.

Cultural heritage (CH) refers to a highly multidisciplinary research and application field, intending to collect, archive, and disseminate the traditions, monuments/artworks, and overall civilization legacies that have been preserved throughout the years of humankind [...]

Knowledge-based recommendation system using semantic web rules based on Learning styles for MOOCs

Digital cultural heritage standards: from silo to semantic web.

Abstract: This paper is a survey of standards being used in the domain of digital cultural heritage with focus on the Metadata Encoding and Transmission Standard (METS) created by the Library of Congress in the United States of America. The process of digitization of cultural heritage requires silo breaking in a number of areas—one area is that of academic disciplines to enable the performance of rich interdisciplinary work. This lays the foundation for the emancipation of the second form of silo which are the silos of knowledge, both traditional and born digital, held in individual institutions, such as galleries, libraries, archives and museums. Disciplinary silo breaking is the key to unlocking these institutional knowledge silos. Interdisciplinary teams, such as developers and librarians, work together to make the data accessible as open data on the "semantic web". Description logic is the area of mathematics which underpins many ontology building applications today. Creating these ontologies requires a human–machine symbiosis. Currently in the cultural heritage domain, the institutions' role is that of provider of this open data to the national aggregator which in turn can make the data available to the trans-European aggregator known as Europeana. Current ingests to the aggregators are in the form of machine readable cataloguing metadata which is limited in the richness it provides to disparate object descriptions. METS can provide this richness.

A multi-level semantic web for hard-to-specify domain concept, Pedestrian, in ML-based software

Raif-semantics: a robust automated interlinking framework for semantic web using mapreduce and multi-node data processing.

The web has evolved, and industry strives to work better every day; the need for data to be accessible at any moment keeps expanding, and with it the need for meaningful query techniques on the web has become a major concern. To transmit meaningful data, or rich semantics, machines and programs need the ability to reach the correct information and make adequate connections. This problem is addressed by Web 3.0: the Semantic Web is growing and collecting an immense amount of information to process, which poses a giant data-management challenge if an ideal result is to be provided whenever needed. Accordingly, this article presents a framework for managing huge amounts of information using MapReduce structures, which internally help an engine fetch information through parallel processing of smaller map jobs and link-discovery measures. Similarity calculation can be challenging; this work runs five similarity-detection algorithms and measures the time each takes, to determine which is the better choice. The proposed framework is built on the widespread JSON data format and the HIVE query language to obtain and process the information according to the customer's needs, together with algorithms for link discovery. Finally, the results are made available on a web page that lets a user load JSON information and make connections between dataset 1 and dataset 2. The results are examined in two different settings and show that the proposed approach interlinks significantly faster; regardless of how large the information is, the time taken is not radically extended.
The results demonstrate that the interlinking of dataset 1 and dataset 2 performs best using LD and JW, with both algorithms taking near-ideal time. The paper automates the interlinking process via a web page where customers can merge two sets of data that should be associated and used.
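String-similarity scores of the kind this abstract compares are a standard building block of interlinking. The following is an illustrative, pure-Python sketch of edit-distance-based matching; it is not the paper's implementation, and the datasets and threshold are invented for the example:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalize edit distance to a 0..1 similarity score."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

# Toy interlinking: pair up records from two datasets whose labels
# score above a threshold.
dataset1 = ["Tim Berners-Lee", "World Wide Web Consortium"]
dataset2 = ["Tim Berners Lee", "W3C", "World Wide Web Consortium (W3C)"]
links = [(x, y) for x in dataset1 for y in dataset2 if similarity(x, y) >= 0.8]
print(links)
```

Real interlinking frameworks typically combine several such measures and block the comparison space first, since the naive all-pairs loop above grows quadratically with dataset size.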

Semantic City Planning Systems (SCPS): A Literature Review

This review focuses on recent research literature on the use of Semantic Web Technologies (SWT) in city planning. The review foregrounds representational, evaluative, projective, and synthetical meta-practices as constituent practices of city planning. We structure our review around these four meta-practices, which we consider fundamental to those processes. We find that significant research exists in all four meta-practices. Linking across domains by combining various methods of semantic knowledge generation, processing, and management is necessary to bridge gaps between these meta-practices and will enable future Semantic City Planning Systems.

Semantic Web Technologies for Internet of Things Semantic Interoperability




Semantic Web and Linked Data: Journals, Articles and Papers


The Semantic Web encompasses the technology that connects data from different sources across the Web as envisioned by Tim Berners-Lee and led by the World Wide Web Consortium (W3C). This Web of Data enables the linking of data sets across data silos on the Web by providing for machine-to-machine communication through the use of Linked Data. This Guide provides descriptions and links to resources used to implement this technology.

The UCLA Semantic Web LibGuide was compiled and written by Rhonda Super. It began as a data page on Ms. Super's personal resource home page. Over a twenty-year period, the Semantic Web resources listed on Rhonda's Resource Page developed into a stand-alone LibGuide that served as a comprehensive resource for the Semantic Web and Linked Data community, providing links to tools, best standards, instructional materials, use cases, vocabularies, and more. The Guide was updated continuously through August 2022 using the SpringShare LibGuide platform as customized by the UCLA Library. Many of its resources provide a historical look at the development of Linked Data.

Ms. Super holds a BA in English and Government and an MA in Communications from Ohio University. She earned her MLIS from San Jose State University with a concentration in archives, rare books, and academic libraries. She earned a Certificate in XML and RDF Systems from the Library Juice Academy. Ms. Super was awarded scholarships to attend the California Rare Book School where she studied Rare Books for Scholars and Archivists, Descriptive Bibliography, and History of the Book: Nineteenth and Twentieth Centuries. Ms. Super was employed by the UCLA Library from 2007 until her retirement in 2022.

The final iteration of the Guide is deposited in the University of California eScholarship Open Access repository so the Linked Data community can continue to use it as a resource.

If you cite resources from this Guide, please check the original resource for copyright and citation requirements.


About the Semantic Web

The Semantic Web provides for the ability to semantically link relationships between Web resources, real world resources, and concepts through the use of Linked Data enabled by Resource Description Framework (RDF). RDF uses a simple subject-predicate-object statement known as a triple for its basic building block. This provides a much richer exploration of Web and real world resources than the Web of Documents to which we are accustomed.
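The triple model described above can be sketched in a few lines of plain Python. This is a toy illustration of the subject–predicate–object idea, not a real RDF library; the resource URIs and the `foaf:`-style predicate labels are invented for the example:

```python
# Each statement is a (subject, predicate, object) triple.
# URIs here are made up for the example.
triples = {
    ("http://example.org/alice", "rdf:type", "foaf:Person"),
    ("http://example.org/alice", "foaf:name", "Alice"),
    ("http://example.org/alice", "foaf:knows", "http://example.org/bob"),
    ("http://example.org/bob",   "foaf:name", "Bob"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Who does Alice know?  Follow the foaf:knows link, then look up names.
known = {o for _, _, o in match("http://example.org/alice", "foaf:knows")}
names = {o for s, _, o in match(p="foaf:name") if s in known}
print(names)  # {'Bob'}
```

The wildcard pattern matching is essentially what a SPARQL basic graph pattern does over an RDF store, just without the standardized syntax, datatypes, or serializations.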

LINKED OPEN DATA (LOD) CLOUD


About the LOD Cloud

The diagram on this page is a visualization of Linked Open Datasets published in the Linked Data format as of April 2014. The large circle in the center is DBpedia, the linked-data version of Wikipedia. Click on the diagram to learn more about the diagram, licensed and open linked data, statistics about the datasets in the diagram, and the latest version of the LOD Cloud. As of June 2018, you can view Sub-Clouds by subject area.

Linking Open Data cloud diagram 2014, by Max Schmachtenberg, Christian Bizer, Anja Jentzsch and Richard Cyganiak.

5-Star Open Data Rules


5-Star Open Data


  • 5-Star Open Data: An explanation of the costs and benefits of the 5-Star Open Data deployment scheme, with examples.
  • Open Data Certificate: Open Data Institute. Open Data Certificate is a free online tool to assess and recognize the sustainable publication of quality open data. The tool benchmarks data against standards covering legal, practical, technical and social requirements to support trust in and use of sustainable data. A badge that can be embedded in a website is awarded to a data publisher based on the publisher's answers to a questionnaire. The Certificate builds on standards such as opendefinition.org, 5-Star Open Data, the Sunlight principles, and DCAT.

Getty Vocabularies Documentation

For the Getty Vocabularies, please see the Registries, Portals, and Authorities page under Vocabularies, Ontologies & Frameworks.

Best Practices and Standards

Trust is a major component of the Semantic Web. This requires providing accurate information when publishing a Linked Data instance. The World Wide Web Consortium (W3C), comprised of an international community, develops Web standards and best practices. Additionally, authorities in subject disciplines establish, administer, and maintain standards in their disciplines which adhere to W3C best practices.

This page provides access to information regarding best practices and standards relevant to Semantic Web technology as developed by the W3C and other authoritative bodies. For controlled vocabularies, ontologies, etc., please consult the Vocabularies, Ontologies & Frameworks page.

  • ALCTS Standards Association for Library Collections & Technical Services (ALCTS). The ALCTS Standards is designed to be an aggregator providing a single place to find standards pertinent to the information industry. The guide is organized by topic.
  • Best Practice Recipes for Publishing RDF Vocabularies Berrueta, Diego and Jon Phipps. (2008, Aug. 28). W3C. This document describes best practice recipes for publishing vocabularies or ontologies on the Web in RDF Schema or OWL. Each recipe introduces general principles and an example configuration for use with an Apache HTTP server which may be adapted to other environments.
  • Best Practices for Recording Faceted Chronological Data in Bibliographic Records American Library Association Institutional Repository, Subcommittee on Faceted Vocabularies; Mullin, Casey; Anderson, Karen; Contursi, Lia; McGrath, Kelley; Prager, George; Schiff, Adam. (2020, June 19). This document describes best practices for encoding the date(s) of creation of works and expressions in bibliographic descriptions. The categories of dates, currently serviced by MARC 046 and 388 fields, covered by these practices are: date(s) of creation of individual works; date(s) of creation of the aggregated works in a compilation; date(s) of creation of aggregating works (compilations, anthologies, etc.); and date(s) of creation of expressions.
  • Data on the Web Best Practices This W3C document provides best practices on a range of topics including data formats, data access, data identification and metadata by providing guidelines on how to represent, describe and make data available in a way that it will be easy to find and to understand. The document provides a series of best practices. A template is used to show the "what", "why" and "how" of each best practice.
  • Generating RDF from Tabular Data on the Web W3C. (2015, December 15). This document describes the process of converting tabular data to create RDF subject-predicate-object triples which may be serialized in a concrete RDF syntax such as N-Triples, Turtle, RDFa, JSON-LD, or TriG.
  • Guidelines for Collecting Metadata on Linked Datasets in the datahub.io Data Catalog This page explains how data publishers describe datasets they want included in the DataHub (aka LOD Cloud), a registry of open data and content packages maintained by the Open Knowledge Foundation. The page also provides access to a validator that tests whether a data set fulfills the requirements for inclusion in the LOD Cloud.
  • Library of Congress (LC) Metadata This page provides links to the LC Linked Data Service metadata structure standards including Metadata Authority Description Schema in RDF (MADS/RDF), Simple Knowledge Organization System (SKOS), Web Ontology Language (OWL), Resource Description Framework (RDF), RDF Schema (RDFS), Dublin Core Metadata Initiative Metadata Terms, and SemWeb Vocab Status ontology. There is also an explanation of the relationship between LC authorities and vocabularies and SKOS.
  • Linked Data Platform Best Practices and Guidelines This W3C document provides best practices and guidelines for implementing Linked Data Platform [LDP] servers and clients. It also provides links to associated W3C documents.
  • PCC Task Group on URIs in MARC Year One Report Bremer, Robert; Folsom, Steven; Frank, Paul; et al. (2016, October 6). This Program for Cooperative Cataloging report discusses the issues associated with setting standards for provisioning URIs in MARC in transitioning from MARC to linked data. Some of the issues include repeatability, pairing, ambiguous relationships, the significance of the ordinal sequence, and identifying a potential field and/or indicator/subfield to record an identifier representing a work.
  • Wikipedia: Authority Control Wikipedia. This page describes the editing community's consensus with regard to authority control in Wikipedia articles. It describes how authority control is used in Wikipedia articles to link to corresponding entries in library catalogs of national libraries and other authority files all over the world. The page also provides instruction for using the Wikipedia template to add authority control identifiers to articles.
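The tabular-data-to-RDF conversion that the "Generating RDF from Tabular Data on the Web" entry above describes can be sketched with the standard library. This is a toy illustration of the idea, not the W3C algorithm itself; the column-to-predicate mapping and the row URIs are invented for the example:

```python
import csv
import io

# A tiny CSV table; in practice this would come from a file on the web.
table = io.StringIO(
    "id,name,homepage\n"
    "1,Alice,http://example.org/alice\n"
    "2,Bob,http://example.org/bob\n"
)

# Hypothetical mapping from column names to predicate URIs.
PREDICATES = {
    "name": "<http://xmlns.com/foaf/0.1/name>",
    "homepage": "<http://xmlns.com/foaf/0.1/homepage>",
}

def rows_to_ntriples(reader, base="http://example.org/row/"):
    """Emit one N-Triples statement per (row, mapped column) pair."""
    lines = []
    for row in reader:
        subject = f"<{base}{row['id']}>"
        for col, pred in PREDICATES.items():
            value = row[col]
            # Crude heuristic for the sketch: URLs become resources,
            # everything else becomes a plain literal.
            obj = f"<{value}>" if value.startswith("http") else f'"{value}"'
            lines.append(f"{subject} {pred} {obj} .")
    return lines

for line in rows_to_ntriples(csv.DictReader(table)):
    print(line)
```

The actual W3C recommendation additionally handles datatypes, language tags, metadata annotations, and escaping, which this sketch deliberately omits.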

Additional Resources about Standards

  • Using the W3C Generating RDF from Tabular Data on the Web Recommendation to manage small Wikidata datasets Baskauf, Steven J. and Baskauf, Jessica K. (2021, June 6). This article discusses the W3C recommendation for generating RDF from tabular data.

Metadata Application Profiles (MAPs)

A metadata application profile (MAP) is a set of recorded decisions about a shared application or metadata service, whether it is a datastore, repository, management system, discovery indexing layer, or other, for a given community. MAPs declare what types of entities will be described and how they relate to each other (the model), what controlled vocabularies are used, what fields are required and which fields have a cap on the number of times they can be used, data types for string values, and guiding text/scope notes for consistent use of fields/properties.

A MAP may be a multipart specification, with human-readable and machine-readable aspects, sometimes in a single file, sometimes in multiple files (e.g., a human-readable file that may include input rules, a machine-readable vocabulary, and a validation schema).

The function of a MAP is to clarify the expectations of the metadata being ingested, processed, managed, and exposed by an application or service and document shared community models and standards, and note where implementations may diverge from community standards.
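As a toy sketch of what the machine-readable side of a MAP might look like, the snippet below declares which fields are required and caps how many times each may repeat, then validates a record against those rules. The field names and rules are invented for the example; real profiles are expressed in standards such as SHACL, ShEx, or DCAP specifications rather than ad-hoc Python:

```python
# A minimal, invented application profile: which fields are required,
# and how many values each field may carry.
PROFILE = {
    "title":   {"required": True,  "max_occurs": 1},
    "creator": {"required": False, "max_occurs": 3},
}

def validate(record: dict) -> list:
    """Return a list of human-readable violations of the profile.

    Each record field maps to a list of values, so repetition
    can be checked against the profile's max_occurs cap.
    """
    errors = []
    for field, rule in PROFILE.items():
        values = record.get(field, [])
        if rule["required"] and not values:
            errors.append(f"missing required field: {field}")
        if len(values) > rule["max_occurs"]:
            errors.append(f"too many values for {field}")
    return errors

print(validate({"title": ["Semantic Web Primer"], "creator": ["A", "B"]}))  # []
print(validate({"creator": ["A", "B", "C", "D"]}))
```

This mirrors the MAP function described above in miniature: the profile is the documented, shared expectation, and validation is how an ingesting application enforces it.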

Cornell University Library. (2018, October 23). CUL Metadata Application Profiles. Downloaded January 2020, from

Library of Congress. (2019, April 30). PCC Task Group on Metadata Application Profiles. Downloaded July 19, 2022 from https://confluence.cornell.edu/display/mwgweb/CUL+Metadata+Application+Profiles

  • BIBCO Standard Record (BSR) RDA Metadata Application Profile Library of Congress, Program for Cooperative Cataloging (PCC). (2017, September 6). The BSR is a model for bibliographic monographic records using a single encoding level (Ldr/17=‘blank’) in a shared database environment, and it follows RDA 0.6.4 in its approach to core. The BSR establishes a baseline set of elements that emphasize access points over descriptive data, while not precluding the use of any data representing a more extensive cataloging treatment. The BSR MAP consists of a combination of RDA Core, RDA Core if, PCC Core, and PCC recommended elements applicable to archival materials, audio recordings, cartographic resources, electronic resources, graphic materials, moving images, notated music, rare materials, and textual monographs. Digital formats, digital reproductions, and authority records are also covered.
  • BIBFRAME Profiles: Introduction and Specification Library of Congress. (2014, May 5). This document describes how BIBFRAME Profiles are created, maintained and used. It describes an information model and reference serialization to support a means for identifying and describing structural constraints addressing functional requirements, domain models, guidelines on syntax and usage, and possibly data formats.
  • CONSER Standard Record (CSR) RDA Metadata Application Profile Library of Congress, Program for Cooperative Cataloging (PCC). (2020, January 21). The CSR is a model for serial descriptive records using a single encoding level (Ldr/17=‘blank’) in a shared database environment, and it follows RDA 0.6.4 in its approach to the concept of core. The CSR establishes a baseline set of elements that emphasize access points over descriptive data while not precluding the use of any data representing a more extensive cataloging treatment. The CSR consists of a combination of RDA Core, RDA Core if, PCC Core, and PCC Recommended elements applicable to textual serials in various formats. Instructions for rare serials and authority records are included.
  • CUL Metadata Application Profiles Cornell University Library Metadata Application Profiles. This page provides an overview and documentation of Cornell University Library's use of metadata application profiles (MAPs). The page offers a definition and explains the role of MAPs in an application or metadata service, and gives examples. A wealth of information regarding documentation for training, MAPS used at CUL, and the CUL metadata ecosystem is provided.
  • DLF AIG Metadata Application Profile Clearinghouse Project Digital Library Federation (DLF), Assessment Interest Group (AIG) Metadata Working Group. The mission of this project is to provide a hub and repository for collecting application profiles, mappings, and related specifications that aid or guide descriptive metadata conventions for digital repository collections to be shared with peers in the metadata community. The initial focus is on digital repository descriptive metadata documentation and specifications.
  • Digital Public Library (DPLA) Metadata Application Profile DPLA MAP Working Group. (2017, December 7). Version 5. This is the technical specification of the DPLA's Metadata Application Profile and provides a list of classes and properties used. Links to other useful documentation include an introduction to the profile, geographic and temporal guidelines, metadata quality guidelines, and rights statements guidelines.
  • Dublin Core Application Profiles (Guidelines for ) This document provides a framework for designing a Dublin Core Application Profile (DCAP), and more generally, a good blueprint for implementing a generic model for metadata records. A DCAP can use any terms that are defined on the basis of RDF, combining terms from multiple namespaces as needed.
  • Dublin Core Collection Description Application Profile Dublin Core Collection Description Task Group. (2007, March 9). This document presents full details of the Dublin Core application profile using Dublin Core properties for describing a collection, a catalogue, or an index.
  • IFLA Library Reference Model (IFLA LRM) International Federation of Library Associations and Institutions (IFLA). (2017, December). IFLA LRM is a high-level conceptual reference model developed within an enhanced entity-relationship modelling framework for bibliographic data. The model aims to make explicit general principles governing the logical structure of bibliographic information, without making presuppositions about how that data might be stored in any particular system or application. Distinctions between data traditionally stored in bibliographic or holdings records and data traditionally stored in name or subject authority records are not made.
  • PCC Task Group on Metadata Application Profiles Library of Congress, Program for Cooperative Cataloging (PCC). (2019, April 30). This page outlines the Program for Cooperative Cataloging (PCC)’s Task Group on Metadata Application Profiles charge to help PCC understand issues and practices associated with the management of MAPs and to help develop the expertise needed within PCC to work with MAPs. The charge includes defining MAPs in the PCC context, performing an environmental scan of current work in this space, determining what shareable application profiles means in the PCC context, collaborating with LDRP2 profiles groups, monitoring ongoing LDRP2 PCC Cohort discussions, and recommending actions for a plan to create and maintain profiles that meet stated use cases for application profiles.
  • BIBFLOW BIBFLOW is a two-year project of the UC Davis University Library and Zepheira, funded by IMLS. Its official title is “Reinventing Cataloging: Models for the Future of Library Operations.” BIBFLOW’s focus is on developing a roadmap for migrating essential library technical services workflows to a BIBFRAME / LOD ecosystem. This page collects the specific library workflows that BIBFLOW will test by developing systems to allow library staff to perform this work using LOD native tools and data stores. Interested stakeholders are invited to submit comments on the workflows developed and posted on this site. Information from comments will be used to adjust testing as the project progresses.
  • CODE4LIB Wiki This is the Wiki for library computer programmers and library technologists. It provides information regarding software, conferences, topics, local & regional groups, and interest groups.
  • DBPedia Blog DBpedia is an open, free, and comprehensive global knowledge base which is continuously extended and improved by putting into effect a quality-controlled and reliable fact extraction from Wikipedia and Wikidata. This blog provides information regarding DBpedia, tools, events, dataset releases, the DBpedia ontology, and more.
  • Dublin Core Metadata Initiative Wiki This MediaWiki for the Dublin Core Metadata Initiative (DMCI) provides information on DCMI's activities regarding work on architecture and modeling, discussions and collaborative work in DCMI Communities and DCMI Task Groups, annual conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices. Access the DCMI Handbook and LD4PE Linked Data Exploratorium.
  • FRBR Open Comments This blog encourages transparency and invites comments regarding the continued development of the international library entity relationship model, the Functional Requirements of Bibliographic Records (FRBR) and the FRBR-Library Reference Model (FRBR_LRM), a consolidation of the FRBR, FRAD and FRSAD conceptual models. Access an Executive Summary, and read or contribute to the General Comments or other areas of interest such as User tasks, Entities, User population considered, Entity-Relationship Diagrams, Modeling of Aggregates, and more.
  • Hanging Together: The OCLC Research Blog Hanging Together is OCLC's research blog. It provides information about the types of projects and issues which OCLC is researching and with whom it is partnering. The blog covers a wide range of topics including Architecture and Standards, Digitization, Identifiers, Infrastructure, Linked Data, Metadata, Modeling New Services, and more.
  • Schema Bib Extend Community Group This is the main Wiki page for the Schema Bib Extend Community Group, a W3C group formed to discuss and prepare proposal(s) for extending Schema.org schemas for the improved representation of bibliographic information markup and sharing. The Wiki provides links to the following topics: Recipes and Guidelines for those looking to adopt Schema.org for bibliographic data; Areas for Discussion; Use Cases; Scope; Object Types; Vocabulary Proposals; and Example Library.
  • Schema blog This is the official schema.org blog.

Below is a list of books which provide a good introduction to the Semantic Web. Items whose titles are highlighted in blue link either to the UCLA Library record for that title, if the title is held by the library, or to an online copy if available. Use the Safari Books Online link to search for additional resources.

This page provides a short list of datasets and data portals. To explore the global network of datasets connected on the Web, click on the Linked Open Data Cloud on the home page.

  • DataCite DataCite is a global non-profit organization that provides persistent identifiers (DOIs) for research data and other research outputs. Use it to locate, identify, and cite research data. DataCite provides several services, including a global registry of research data repositories from a diverse range of academic disciplines and information about them (re3data.org), a citation formatter, content negotiation, an Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) service, and more.
  • Data.gov This page provides access to the datasets in the United States open government data catalog. Data are provided by hundreds of organizations and Federal agencies.
  • Data Hub - Linking Open Data Cloud This Data Hub group catalogs data sets that are available on the Web as Linked Data and contain data links pointing at other Linked Data sets. A search option for the datasets is available. The descriptions of the data sets in this group are used to generate the Linking Open Data Cloud diagram at regular intervals, as well as the statistics provided in the State of the LOD Cloud document.
  • Data Portals DataPortals.org is a comprehensive list of government and NGO open data portals across the world. It is curated by a group of leading open data experts from around the world, including representatives from local, regional and national governments, international organizations such as the World Bank, and numerous NGOs.
  • DBpedia DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia provides the ability for sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data.
  • EPSG Geodetic Parameter Dataset Geodesy Subcommittee of the International Association of Oil & Gas Producers (IOGP). The EPSG Geodetic Parameter Dataset is a structured dataset of Coordinate Reference Systems and Coordinate Transformations. It can be accessed through an online registry or downloaded as zip files. Geographic coverage is worldwide, but it does not record all possible geodetic parameters in use around the world. The dataset is maintained by the IOGP's Geomatics Committee.
  • Europeana Europeana provides access to European cultural heritage material from institutions across Europe. Discover artworks, books, music, and videos on art, newspapers, archaeology, fashion, science, sport, and much more.
  • GOKb GOKb (Global Open Knowledge base) is an open data repository describing electronic journals and books, publisher packages, and platforms for use in a library environment. It tracks changes over time, including publisher take-overs and bibliographic changes.
  • Linked Open Data Cloud lod-cloud.net. This is the home of the LOD Cloud diagram, a dataset of the datasets published in Linked Data format that make up the LOD Cloud. Datasets contained in the Cloud should follow the Linked Data principles listed on the site's About page. Subject areas have been broken into subclouds for easier use.
  • List of online music databases Wikipedia. (2021, April 19). This page lists music domain datasets covering sheet music, reviews, artists, labels, a heavy metal encyclopedia, audio samples, Arabic and Middle Eastern music artists, tracks, and albums, biographies and discographies, audio-based music recognition, song lyrics, and more.
  • Resources.data.gov This repository of Federal enterprise data resources provides links to policies, tools, case studies, and other resources to support Federal government data governance, management, exchange, and use.
  • WordNet WordNet® is a lexical database of English useful for computational linguistics and natural language processing. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets). Synsets are interlinked by means of conceptual-semantic and lexical relations. The resulting network of meaningfully related words and concepts can be navigated with the browser. The dataset is available for downloading. Unfortunately, due to staffing, updates have been suspended.
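Several of the datasets above, DBpedia in particular, expose their data through public SPARQL endpoints. The sketch below shows the kind of structured query this enables; the property names follow the DBpedia ontology, and the exact results depend on the live data:

```sparql
# Find California cities with recorded populations over 500,000.
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT ?city ?population WHERE {
  ?city dbo:isPartOf dbr:California ;
        dbo:populationTotal ?population .
  FILTER (?population > 500000)
}
ORDER BY DESC(?population)
```

Queries like this can be run interactively against DBpedia's public endpoint at dbpedia.org/sparql.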

There are many resources available to help you learn about the Semantic Web and Linked Data. This page provides access to a few instructional resources on topics relating to Linked Data in a variety of formats. See the SPARQL page for SPARQL related instructional resources.

  • BIBFRAME Manual Library of Congress. (2019). This is the Library of Congress training manual for the BIBFRAME Editor and BIBFRAME Database.
  • BIBFRAME Training at the Library of Congress The Library of Congress is providing training for participants in the BibFrame Pilot which is testing bibliographic description in multiple formats and in multiple languages. This website provides access to the three training modules: 1) Introduction to the Semantic Web and Linked Data; 2) Introduction to the BibFrame Tools; and 3) Using the BibFrame Editor. There is a PowerPoint presentation and quiz for each module, and some modules have additional resources.
  • Catalogers Learning Workshop (CLW) Library of Congress. This page links to Library of Congress training materials for topics such as Library of Congress Subject Headings, RDA: Resource Description & Access; BIBFRAME training at the Library of Congress; BIBFRAME Webcasts and Presentations; and other training resources.
  • Competency Index for Linked Data (CI) LD4PE. The Competency Index for Linked Data (CI) is an initiative of Exploring Linked Data, a Linked Data for Professional Educators (LD4PE) project. The web site supports the structured discovery of learning resources for Linked Data available online by open educational resource (OER) and commercial providers. The site indexes learning resources within a framework according to specific competencies, skills, and knowledge they address. Tutorials are available for such topics as Fundamentals of Resource Description Framework (RDF), Fundamentals of Linked Data, RDF Vocabularies and Application Profiles, Creating and Transforming Linked Data, Interacting with RDF data, and Creating Linked Data applications. LD4PE is administered under the jurisdiction of the DCMI Education & Outreach Committee and is funded by the Institute of Museum and Library Services (IMLS).
  • Free Your Metadata This site, geared for libraries, archives, and museums, enables the matching of metadata with controlled vocabularies connected to the Linked Data cloud and the enrichment of unstructured description fields using a named-entity extraction extension for OpenRefine. Learn how to check for errors and correct them, and publish metadata in a sustainable way. The site also provides information on relevant publications.
  • The language of languages Might, Matt. This article provides a brief explanation of grammars and common notations for grammars, such as Backus-Naur Form (BNF), Extended Backus-Naur Form (EBNF) and regular extensions to BNF. Grammars determine the structure of programming languages, protocol specifications, query languages, file formats, pattern languages, memory layouts, formal languages, config files, mark-up languages, formatting languages, and meta-languages. The Extended Backus-Naur Form notation is used to describe the essential BIBFRAME Profile syntax elements.
  • Linked Data: Evolving the Web into a Global Data Space Heath, Tom and Bizer, Christian. (2011). (1st edition). Synthesis Lectures on the Semantic Web: Theory and Technology, 1:1, 1-136. Morgan & Claypool. This overview of Linked Data principles and the Web of Data discusses patterns for publishing Linked Data and describes deployed Linked Data applications and their architecture. This book supersedes the publication, "How to Publish Linked Data on the Web," by Chris Bizer, Richard Cyganiak, and Tom Heath.
  • Linked Data Tools This site has been created by professional developers to help the web community transition into Web 3.0, or the Semantic Web. The site provides tools and tutorials for learning how to begin using the Semantic Web.
  • MarcEdit and OpenRefine Reese, Terry. (2016, January 16). This page describes how to export a MARC file for use in OpenRefine.
  • MarcEdit YouTube Videos This page lists over 90 videos produced by Terry Reese providing instructions for using MarcEdit. Topics include "MarcEdit 101: I have a MARC record, now what?," "Installing MarcEdit natively on a Mac operating system," "Extract and Edit Subsets of Records in MarcEdit," "MarcEdit Task Automation Tool," and "MarcEdit RDA Helper."
  • NCompass Live: Metadata Manipulations: Using MarcEdit and OpenRefine Nebraska Library Commission. (2015, June 24). This tutorial provides instruction for using OpenRefine and MARCEdit.
  • NCompass Live: Metadata Manipulations: Using Marc Edit And Open Refine To Enhance Technical Services Workflows Nebraska Library Commission. (2015, June 24). This video shows how to use MARCEdit and OpenRefine to edit your catalog records more efficiently, transform your library data from one format to another, and detect misspellings and other inaccuracies in your metadata.
  • Ontogenesis Lord, Phillip. (2012). This is an archived Knowledge Blog which provides access to descriptive, tutorial, and explanatory material about building, using, and maintaining ontologies, as well as the social processes and technology that support this. There are links to articles, many peer reviewed, and tutorials regarding a range of topics of interest for developers and users of ontologies.
  • Ontology Development 101: A Guide to Creating Your First Ontology Noy, Natalya F. and McGuinness, Deborah L. Stanford University. This guide discusses the reasons for developing an ontology and the methodology for creating an ontology based on declarative knowledge representation systems.
  • OpenRefine Wiki External Resources This page lists tutorials and resources developed outside the OpenRefine wiki covering a wide range of topics and use cases, including general instruction, data clean up, geospatial metadata, spreadsheet transformations, and much more.
  • Programming Historian Crymble, Adam, Fred Gibbs, Allison Hegel, Caleb McDaniel, Ian Milligan, Evan Taparata, and Jeri Wieringa, eds. (2016). The Programming Historian. 2nd ed. This blog provides peer-reviewed tutorials geared towards helping humanists learn a wide range of digital tools, techniques, and workflows to facilitate their research. Several of the tutorials are related to linked data. Other tutorials may be of interest to those generating or consuming data.
  • RDFa with schema.org codelab: overview Scott, Dan. (2014, Dec.1). This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Using detailed instructions and examples, this page walks through the process of using schema.org to enhance library web pages so that they contain structured data using the schema.org vocabulary and RDFa attributes.
  • Semantic Web Data at the University of Washington Libraries Cataloging and Metadata Services, University of Washington. This webpage links to a wide range of useful resources and guidelines for working with Linked Data in a university setting. The project was developed with support from the Institute of Museum and Library Services.
  • What Can We Do About Our Legacy Data? Hillmann, Diane. (2015). This is Diane Hillmann's presentation given at the 2015 American Library Association Conference raising questions about moving library data onto the Semantic Web. Posted to SlideShare on June 29, 2015.
  • XPath Tutorial This W3schools page provides an introductory tutorial for XPath, a language for finding information in an XML document.
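The RDFa codelab listed above works by embedding schema.org types and properties directly into a page's existing HTML. A minimal, illustrative sketch (the title and author values here are examples, not markup taken from the codelab):

```html
<!-- Illustrative only: a book description marked up with schema.org via RDFa -->
<div vocab="https://schema.org/" typeof="Book">
  <h2 property="name">Linked Data: Evolving the Web into a Global Data Space</h2>
  <span property="author" typeof="Person">
    <span property="name">Tom Heath</span>
  </span>
  <meta property="datePublished" content="2011"/>
</div>
```

A crawler that understands RDFa can extract structured statements (this Book has this name, this author, this publication date) from markup like this without any separate data feed.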
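The XPath tutorial above introduces location paths for selecting nodes in an XML document. A minimal sketch using Python's standard library, which supports a useful subset of XPath; the sample catalog record is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A tiny, invented catalog record to query against.
xml_doc = """<catalog>
  <book id="b1"><title>Linked Data</title><year>2011</year></book>
  <book id="b2"><title>Understanding Metadata</title><year>2017</year></book>
</catalog>"""

root = ET.fromstring(xml_doc)

# "./book/title" selects every <title> element under a <book> child.
titles = [t.text for t in root.findall("./book/title")]

# Predicates filter on child text: select books whose <year> is 2017.
recent = root.findall("./book[year='2017']")

print(titles)                 # → ['Linked Data', 'Understanding Metadata']
print(recent[0].get("id"))    # → b2
```

Full XPath engines (as in lxml or XSLT processors) add axes, functions, and operators beyond this stdlib subset, but the location-path idea is the same.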

Journals

  • Bulletin of the Association for Information Science and Technology Association for Information Science and Technology. Silver Spring, Maryland. [2013-]
  • Cataloging & Classification Quarterly Haworth Press. Binghamton, NY. (1981)- ISSN: 1544-4554. The journal covers the full spectrum of creation, content, management, use, and usability of bibliographic records, including the principles, functions, and techniques of descriptive cataloging. The range of methods of subject analysis and classification, provision of access for all formats of materials, and policies, planning, and issues connected to the effective use of bibliographic data in modern society are also focuses of this journal.
  • The Code{4}Lib Journal Code{4}Lib Journal, Chapel Hill, N.C. (2007- ). ISSN: 1940-5758. The focus of this journal is to provide the library community with information regarding technology tools for managing information in libraries.
  • International Journal of Web & Semantic Technology Academy & Industry Research Collaboration Center (AIRCC). (2010 - .) ISSN: 0975-9026; EISSN: 0975-9026. This journal focuses on theory, methodology, and applications of web and semantic technology.
  • Journal of library metadata Haworth Press. New York, NY. (2008- ). ISSN: 1937-5034; ISSN: 1938-6389. The metadata that describes library resources is becoming more critical for digital resource management and discovery. This journal covers application profiles, best practices, controlled vocabularies, crosswalking of metadata and interoperability, digital libraries and metadata, federated repositories and searching, folksonomies, individual metadata schemes, institutional repository metadata, metadata content standards, resource description framework, SKOS, topic maps, and more.
  • Journal of the Association for Information Science and Technology Association for Information Science and Technology. Wiley Blackwell. Hoboken, NJ. (2014). This journal publishes original research that focuses on the production, discovery, recording, storage, representation, retrieval, presentation, manipulation, dissemination, use, and evaluation of information and on the tools and techniques associated with these processes.
  • Library Technology Reports American Library Association, Chicago, Ill. (2009- ). Library Technology Reports focuses on the application of technology to library services, including evaluative descriptions of specific products or product classes, and covers emerging technology. The journal is sunsetting in December 2022 and will then be available for single-issue sales only.
  • Web Semantics: Science, Services and Agents on the World Wide Web Elsevier Science. Amsterdam; New York. (2004- ). ISSN: 1873-7749; ISSN: 1570-8268. This journal covers all aspects of Semantic Web development including topics such as knowledge technologies, ontology, agents, databases and the semantic grid. It also focuses on disciplines such as information retrieval, language technology, human-computer interaction and knowledge discovery.

Articles and Papers

  • Addressing the Challenges with Organizational Identifiers and ISNI Smith-Yoshimura, Karen, Gatenby, Janifer, Agnew, Grace, Brown, Christopher, Byrne, Kate, Carruthers, Matt, Fletcher, Peter, Hearn, Stephen, Li, Xiaoli, Muilwijk, Marina, Naun, Chew Chiat, Riemer, John, Sadler, Roderick, Wang, Jing, Wiley, Glen, and Willey, Kayla. (2016). Dublin, Ohio: OCLC Research. This paper discusses a model for using unique identifiers that are resolvable globally over networks via a specific protocol to provide the means to find and identify an organization accurately and to define the relationships among its sub-units and with other organizations.
  • A Division of Labor: The Role of Schema.org in a Semantic Web Model of Library Resources Godby, Carol Jean. (2017). This article describes experiments with Schema.org conducted by OCLC as a foundation for a linked data model for library resources, and why Schema.org was the vocabulary considered in designing the next generation standards for library data.
  • Creating Organization Name Authority within an Electronic Resources Management System Blake, K., & Samples, J. (2009) Library Resources & Technical Services, 53(2), 94-107. To access the linked data project associated with this article, click on Organization Name Linked Data on our Use Cases Page.
  • Creating Value with Identifiers in an Open Data World Open Data Institute and Thomson Reuters. (2014) Creating Value with Identifiers in an Open Data World. Retrieved from http://thomsonreuters.com/site/data-identifiers. This joint effort between Thomson Reuters and the Open Data Institute serves as a guide for how identifiers can create value by empowering linked data for publishing and discovery.
  • The Global Open Knowledgebase (GOKb): open linked data supporting electronic resources management and scholarly communication Antelman, Kristin and Wilson, Kristen. (2015). DOI: http://doi.org/10.1629/uksg.217. CC BY 3.0 License. Global Open Knowledgebase is an open data repository of information related to e-resources as they are acquired and managed in a library environment. This article describes how the GOKb model was developed to track this information.
  • Hello BIBFRAME 2.0: Changes from 1.0 and Possible Directions for the Future Kroeger, Angela J. (2016, October 20). Criss Library Faculty Proceedings & Presentations. 65. This presentation introduces the basics and history of the BIBFRAME model, and its relationship to RDF, FRBR, and RDA. It covers core classes, editors, mixing metadata, holdings, approaches, PREMIS, changes from BIBFRAME 1.0, and more.
  • Introducing the FRBR Library Reference Model Riva, Pat, and Žumer, Maja. (2015). This paper serves as an introduction to the FRBR Library Reference Model which consolidates the FRBR, FRAD, and FRSAD models for bibliographic data, authority data, and subject authority data so that the model's definitions can be readily transferred to the IFLA FRBR namespace for use with linked open data applications.
  • Linked Data in Libraries: A Case Study of Harvesting and Sharing Bibliographic Metadata with BIBFRAME Tharani, Karim. (2015). In "Information Technology and Libraries", 34(1). This paper illustrates and evaluates the Bibliographic Framework (BIBFRAME) as a means for harvesting and sharing bibliographic metadata over the web for libraries. With BIBFRAME disparate library metadata sources such as catalogs and digital collections can be harvested and integrated over the web.
  • LTS and Linked Data: a position paper Naun, Chew Chiat, Kovari, Jason, and Folsom, Steven. (2015, Dec. 16). Prepared for Cornell University Library Technical Services (LTS), this paper explores reasons for adopting linked data techniques for describing and managing library collections, and seeks to articulate a specific role for Library Technical Services within this linked data environment.
  • Making Ontology Relationships Explicit in an Ontology Network Díaz, Alicia, Motz, Regina, and Rohrer, Edelweis. (2011). This paper formally defines the different relationships among networked ontologies and shows how they can be modeled as an ontology network in a case study of the health domain.
  • RDA vocabularies for a twenty-first-century data environment Coyle, Karen. (2010). Library technology reports, v. 46, no. 2, p.5-39. Contents include Library Data in the Web World, Metadata Models of the World Wide Web, FRBR, the Domain Model, and RDA in RDF.
  • The Relationship between BIBFRAME and OCLC’s Linked-Data Model of Bibliographic Description: A Working Paper Godby, Carol Jean. (2013, June). Dublin, Ohio: OCLC Research. This paper describes a proposed alignment between BIBFRAME and an OCLC model using Schema Bib Extend extensions to enhance Schema.org for use with the description of library resources.
  • Sharing Research Data and Intellectual Property Law: A Primer Carroll, Michael W. (2015) PLoS Biol 13(8): e1002235. doi:10.1371/journal.pbio.1002235. This article explains how to work through the general intellectual property and contractual issues for all research data.
  • Towards Identity in Linked Data McCusker, James P. and McGuinness, Deborah L. Rensselaer Polytechnic Institute. This paper poses problems with and solutions for using owl:sameAs for linking datasets when dealing with provenance, context, and imperfect representations in Linked Data. The paper uses examples of merging provenance in biomedical applications.
  • Understanding Metadata Riley, Jenn. National Information Standards Organization (NISO). This primer serves as guidance for working with metadata and covers recent developments, new tools, best practices, and available resources.
  • Web-Scale Querying through Linked Data Fragments Verborgh, Ruben, Vander Sande, Miel, Colpaert, Pieter, Coppens, Sam, Mannens, Erik, Van de Walle, Rik. (2014). This paper explains the core concepts behind Linked Data Fragments, a method that allows efficient linked data query execution from servers to clients through a lightweight partitioning strategy.
  • When owl:sameAs Isn’t the Same: An Analysis of Identity in Linked Data Halpin, Harry, Hayes, Patrick J., McCusker, James P., McGuinness, Deborah L., and Thompson, Henry S. (2010). Patel-Schneider, P. F. et al. (Eds.): ISWC 2010, Part I, LNCS 6496, pp. 305–320, Springer-Verlag Berlin Heidelberg. This document discusses how owl:sameAs is being used and misused on the Web of data, particularly with regards to interactions with inference. The authors describe how referentially opaque contexts that do not allow inference exist, and outline some varieties of referentially-opaque alternatives to owl:sameAs.
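The two identity papers above turn on what owl:sameAs actually asserts: that two IRIs denote one and the same resource, so every statement made about either applies to both. A minimal Turtle sketch of the distinction (the VIAF number and example.org IRI are made-up placeholders):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# Strong claim: full identity, licensing the merger of all statements
# about the two IRIs.
<http://dbpedia.org/resource/Mark_Twain>
    owl:sameAs <http://viaf.org/viaf/00000000> .

# Weaker alternative when two descriptions are closely related
# but not interchangeable in every context.
<http://dbpedia.org/resource/Mark_Twain>
    skos:closeMatch <http://example.org/authors/twain> .
```

As Halpin et al. argue, much published Linked Data uses owl:sameAs where a weaker relation such as skos:closeMatch would be more accurate, which can produce unintended inferences when datasets are merged.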

This page lists Semantic Web services which are of interest to information specialists, libraries, museums, and cultural organizations.

  • Library.Link Network Library.Link Network is a service which transforms data from library resources into searchable resources on the Web using Linked Data.
  • Library of Congress Linked Data Service This is the portal for all of the Library of Congress' Linked Data Vocabularies and Authorities, including LC Subject Headings, the Name Authority File, MARC Relators, LC Classification, LC Children's Subject Headings, LC Genre/Form Terms, ISO Languages, Cultural Organizations, and Content Types, among others.
  • Share-VDE Share-VDE (SVDE) is a discovery interface offering an intuitive delivery service of wide-ranging and detailed search results to library patrons. Library catalogues of participating institutions are converted from MARC to Resource Description Framework (RDF) using the BIBFRAME vocabulary and other ontologies to form clusters of entities. The network of resources created is published as linked data. A common knowledge base of clusters is compiled in a Cluster Knowledge Base named Sapientia. Participating libraries handle their own data as independently as possible and receive their original records converted into linked data. The SVDE infrastructure is built on the LOD Platform.
  • VIAF: The Virtual International Authority File VIAF links and matches multiple name authority files from global resources into a single OCLC-hosted name authority service increasing the utility of library authority files and making them available on the Web.
  • WorldCat Entities OCLC. (2022). This OCLC service provides the ability to search WorldCat Entities for persons and works. Browse through different languages and explore the way each entity links to other external vocabularies and authorities.

Semantic Web technology uses an array of tools. This page lists conversion tools, data management tools, glossaries, ontology & vocabulary building platforms, Semantic Web browsers, validators, XML editors, and XPath tools.

  • W3C Semantic Web Tools This Wiki lists an array of tools for developing Semantic Web applications compiled by the W3C, including development environments, editors, libraries or modules for various programming languages, specialized browsers, and more.

Assessment Tools

  • DLF AIG MWG Metadata Assessment Toolkit The Digital Library Federation (DLF) Assessment Interest Group (AIG) Metadata Working Group (MWG), aka the DLF Metadata Assessment Working Group. The toolkit is a great resource for assessment information, covering a review of the literature, tools, and organizations concerned with metadata assessment, quality, and best practices. The site provides a list of metadata assessment tools, and a collection of application profiles, mappings, code, and best practices contributed by several institutions.
  • LODQuator LODQuator is a data portal built on the Luzzu Quality Assessment Framework for ranking and filtering Linked Open Data Cloud datasets. It provides the ability to search datasets based on their quality using over a dozen metrics which are listed on the site.
  • Luzzu Enterprise Information Systems (EIS) at Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), University of Bonn. Luzzu is a quality assessment framework for Linked Open Data datasets based on the Dataset Quality Ontology (daQ). It assesses Linked Data quality using user-provided domain-specific quality metrics in a scalable manner, provides query-enabled quality metadata on assessed datasets, and assembles detailed quality reports on assessed datasets.

Authority Tools

  • Authority toolkit: create and modify authority records Strawn, Gary L. (2016, June 30). Northwestern University. Evanston, IL USA. This document describes how the Authority Toolkit can be used to create a new authority record from an access field in a bibliographic record. Use the tool to help you enhance the preliminary authority record, enhance an existing authority record, or extract one identity from an undifferentiated personal name authority record and then enhance the preliminary authority record for the extracted identity. The tool can be used to extract information from sources such as VIAF, Wikidata, Wikipedia, and the CERL thesaurus into authority records.

BIBFRAME Tools

  • BIBFRAME Comparison Tool This tool provides for the side-by-side conversion of MARCXML records from the Library of Congress database to BIBFRAME2 using a LCCN or record number. Records can be serialized in Turtle or RDF XML.
  • Bibliographic Framework Initiative Library of Congress. The Bibliographic Framework Initiative is the replacement for MARC developed by the Library of Congress and is investigating all aspects of bibliographic description, data creation, and data exchange. More broadly, the initiative includes accommodating different content models and cataloging rules, exploring new methods of data entry, and evaluating current exchange protocols. This page provides access to the BIBFRAME 2.0 model, vocabulary, extension list view, and MARC 21 to BIBFRAME conversion tools. The BIBFRAME Implementation Register can be accessed here.
  • marc2bibframe2 This tool, available on GitHub, uses an XSLT 1.0 application to convert MARCXML to RDF/XML, using the BIBFRAME 2.0 and MADSRDF ontologies. Information regarding integration of the application with Metaproxy is also available.
  • MARC 21 to BIBFRAME 2.0 Conversion Specifications These specifications were developed to support a pilot in the use of BIBFRAME 2.0 at the Library of Congress. They specify the conversion of MARC Bibliographic records to BIBFRAME Work, Instance and Item descriptions, and MARC Authority records for titles and name/titles to BIBFRAME Work descriptions. The specifications were written from the perspective of MARC so that each element in MARC would at least be considered, even if not converted. The specifications are presented in MS Excel files with explanatory specifications in MS Word.
  • Sinopia Sinopia is an implementation of the Library of Congress BIBFRAME Editor and Profile Editor.

Conversion Tools

  • Freeformatter JSON to XML Converter This tool converts a JSON file into an XML file. The converter uses rules to make allowances for JSON constructs that do not have an equivalent XML representation.
  • Freeformatter XML to JSON Converter This tool converts an XML file into a JSON file. The converter uses rules to make allowances for XML constructs, such as differing item types, that do not have an equivalent JSON representation.
  • OxGarage OxGarage is a RESTful web conversion service developed by the University of Oxford IT Services. Most transformations use the Text Encoding Initiative (TEI) format as a pivot format, and many other formats are supported, including TEI to Word and Word to TEI. Give the page a moment to load, then choose a format from the Documents, Presentations, or Spreadsheets menu and a target format from the list provided for each option.
  • Pandoc Pandoc converts documents in markdown, reStructuredText, textile, HTML, DocBook, LaTeX, MediaWiki markup, TWiki markup, OPML, Emacs Org-Mode, Txt2Tags, Microsoft Word docx, LibreOffice ODT, EPUB, or Haddock markup to HTML formats, word processor formats, Ebooks, documentation formats, page layout formats, outline formats, TeX formats, PDF, lightweight markup formats, and custom formats.
  • SearchFAST OCLC Research. SearchFAST is a suite of tools for working with FAST headings. The tools include a converter to convert Library of Congress Subject Headings to FAST headings, searchFAST, a search interface for the FAST database, and mapFAST, a Google Maps mashup providing map-based access to bibliographic records using FAST geographic and event authorities. Other tools in the suite include FAST Linked Data, authorities formatted using schema.org and SKOS (Simple Knowledge Organization System) that are linked to LCSH and other authorities such as VIAF, Wikipedia, and GeoNames, and assignFAST, a web service that automates manual selection of FAST subjects.
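The JSON-to-XML converters above all face the same underlying mismatch: JSON has arrays and anonymous values, while XML has ordered, named elements. A naive sketch of one possible mapping using only Python's standard library (the element-naming choices here are arbitrary illustrations, not the rules of any listed tool):

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(name, value):
    """Recursively map a parsed JSON value onto an XML element:
    dicts become child elements, list entries repeat as <item>,
    and scalars become text content."""
    elem = ET.Element(name)
    if isinstance(value, dict):
        for key, val in value.items():
            elem.append(json_to_xml(key, val))
    elif isinstance(value, list):
        for entry in value:
            elem.append(json_to_xml("item", entry))
    else:
        elem.text = "" if value is None else str(value)
    return elem

record = json.loads('{"title": "Linked Data", "authors": ["Heath", "Bizer"]}')
xml_string = ET.tostring(json_to_xml("record", record), encoding="unicode")
print(xml_string)
# → <record><title>Linked Data</title><authors><item>Heath</item><item>Bizer</item></authors></record>
```

Production converters add rules this sketch omits, such as distinguishing attributes from elements, handling namespaces, and preserving enough information to round-trip the document.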

Data Management Tools

  • CKAN CKAN (Comprehensive Knowledge Archive Network) is an open-source data portal platform aimed at data publishers such as national and regional governments (including the U.S. government), companies, and organizations wanting to make their data open and available. CKAN's harvesting framework can be used to retrieve, normalize, and convert dataset metadata from multiple catalogs. It provides a catalog system, integration with third-party content management systems like Drupal and WordPress, data visualization and analytics, integrated data storage and a full data API, and more. CKAN is maintained by the Open Knowledge Foundation, which provides support and hosting.
  • DataHub DataHub is a free data management platform from the Open Knowledge Foundation. It can be used to publish or register datasets as well as create and manage groups and communities. It is based on the CKAN data management system.
  • The Dataverse Project A repository for research data that supports the sharing of open data and enables reproducible research.
  • eXistdb eXistdb is a NoSQL XML document database which uses the XML Query Language (XQuery) for querying and indexing. It can work alongside oXygen. Users of eXistdb include the Office of the Historian, United States Department of State and the University of Victoria Humanities Computing and Media Centre.
  • Fedora Fedora (Flexible Extensible Digital Object Repository Architecture) is a modular, open source repository platform for the management and dissemination of digital content, including curating research data throughout the research life cycle from beginning through preservation in a RDF environment. Fedora is being used for digital collections, e-research, digital libraries, archives, digital preservation, institutional repositories, open access publishing, document management, digital asset management, and more.
  • Jupyter Jupyter is an open-source web application for creating and sharing documents containing live code, equations, visualizations and narrative text. It can be used for data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more. Jupyter supports over 40 programming languages, including Python, R, Julia, and Scala.
  • KriKri KriKri is a Ruby on Rails open source engine for metadata aggregation, enhancement, and quality control developed by the Digital Public Library of America (DPLA) and released under the MIT License. It works with Heiðrún, DPLA's metadata ingestion system. Features include: harvesting metadata from OAI-PMH providers; creating RDF metadata models, with specific support for the DPLA Metadata Application Profile; enrichments for mapped metadata, including date parsing and normalization and stripping and splitting on punctuation; parsing metadata and mapping it to RDF graphs using a Domain Specific Language; and more.
  • OpenRefine OpenRefine (formerly Google Refine) is a tool for working with data. Use it to clean data, transform data from one format into another, extend data with web services, and link it to databases such as Wikidata.
  • Samvera Samvera (previously Hydra) is an open source digital asset management framework. The system uses Ruby gem building blocks allowing for customization. Samvera instances can be cloned and adapted to local needs. Bundled solutions that require fewer local resources, as well as cloud-based hosted versions, include Avalon, Hyrax, and Hyku.
  • Wikibase Wikibase was developed for Wikidata as an open source collection of applications and libraries for creating and sharing structured data as linked data entities and their relationships. It consists of a set of extensions to the MediaWiki software for storing and managing data (Wikibase Repository) and for embedding data on other wikis (Wikibase Client). Wikibase provides an editing interface for creating, updating, merging, and deleting item and property entities.

Discovery Interfaces

  • Blacklight Blacklight is an open source discovery interface framework for searching an Apache Solr index. Blacklight MARC provides library catalog enhancements, Spotlight enables the creation of feature-rich websites for digital collections, and GeoBlacklight provides for the discovery and sharing of geospatial data. The search box, facet constraints, stable document URLs, and more are customizable via Rails templating mechanisms. It accommodates heterogeneous data, allowing different information displays for different types of objects.

Editors & Generators

  • Geany Geany is an open source text editor using the GTK+ toolkit with basic features of an integrated development environment (IDE). It supports many filetypes including C, Java, JavaScript, PHP, HTML, CSS, Python, Perl, Pascal, Ruby, XML, SQL, and more. Features include syntax highlighting, code folding, symbol name auto-completion, auto-closing of XML and HTML tags, code navigation, a build system to compile and execute your code, symbol lists, and a plug-in interface. Geany runs on every platform supported by the GTK libraries, including Linux, FreeBSD, NetBSD, OpenBSD, Mac OS X, AIX v5.3, Solaris Express, and Windows. Only the Windows port of Geany is missing some features.
  • LIME Palmirani, Monica, Vitali, Fabio, and Cervone, Luca. LIME is an open source, customizable, web-based editor for converting unstructured legal documents into XML. Currently, there are demo versions of LIME for three schema languages: Akoma Ntoso, TEI, and LegalRuleML. LIME provides a linked outline view of the document and a contextual markup menu showing available elements. Click on the Demo tab at the top of the web site to choose a schema. LIME is under development at CIRSFID at the University of Bologna.
  • MarcEdit MarcEdit is a free MARC editing tool. Use it to download a MARC record and transform it into an RDF/XML serialization of the record, or to perform MARC database maintenance. MarcEdit also includes a tool for querying registered XSLT crosswalks and downloading them for use with MarcEdit.
  • Notepad++ Notepad++ is a free source code editor that runs in the MS Windows environment.
  • oXygen oXygen is a licensed cross platform XML editor that works with all XML-based technologies including XML databases, XProc pipelines, and web services. oXygen XML Author comes with a configurable and extensible visual editing mode based on W3C CSS stylesheets with ready-to-use DITA, DocBook, TEI, XHTML, XSLT, and XQuery support.
  • pymarc Python Software Foundation. (2019). pymarc is a Python library for working with bibliographic data encoded in MARC21. It provides an API for reading, creating, and modifying MARC records.
  • RDFa Play RDFa Play is a real-time RDFa 1.1 editor, data visualizer and debugger. Paste your HTML+RDFa code into the editor to view a preview page, a data visualization, and the raw data of your code.
  • Dublin Core Generator This site provides three tools developed by Nick Steffel to generate Dublin Core code. The Simple Generator generates simple Dublin Core metadata using only the 15 main elements. Advanced Dublin Core metadata code using the more detailed qualified elements and encoding schemes can be generated using the Advanced Generator, and there is a generator for the ZineCore variation of Dublin Core.

Glossaries

  • Glossary of Metadata Standards This glossary lists the most common metadata standards used in the cultural heritage community. Several of them are listed on our Vocabularies page, where a color version of the Seeing Standards poster is also shown. A poster version of the glossary is also available.
  • Glossary of Terms Relating to Thesauri and Other Forms of Structured Vocabulary Will, Leonard D. and Will, Sheena. (2013). This is an alphabetical list of terms associated with thesauri and structured vocabularies.
  • Linked Data Glossary This is the W3C's glossary of Linked Data terms.

Ontology/Vocabulary Building Platforms and Tools

  • Fluent Editor Fluent Editor is a tool for editing and manipulating complex ontologies that use Controlled Natural Language. A main feature is the use of Controlled English as a knowledge modeling language: the editor prohibits entering any sentence that is grammatically or morphologically incorrect and actively helps the user during sentence writing. It is free for individual or academic use. Access to updates and information is given with registration.
  • Neologism Neologism is an open source vocabulary publishing platform for creating and publishing vocabularies compatible with Linked Data principles. It supports the RDFS standard enabling you to create RDF classes and properties. It also supports a part of OWL. Neologism is written in PHP and built on the Drupal platform.
  • NeOn NeOn is an open source multi-platform toolkit for the support of the ontology engineering life-cycle. The toolkit is based on the Eclipse platform and provides an extensive set of plug-ins covering a variety of ontology engineering activities, including Annotation and Documentation, Development, Human-Ontology Interaction, Knowledge Acquisition, Management, Modularization and Customization, Ontology Dynamics, Ontology Evaluation, Ontology Matching, Reasoning and Inference, and Reuse. NeOn's aim is to advance the state of the art in using ontologies for large-scale semantic applications in distributed organisations by improving the ability to handle multiple networked ontologies that exist in a particular context, are created collaboratively, and might be highly dynamic and constantly evolving.
  • OOPS! (OntOlogy Pitfall Scanner!) OOPS! is an application used to detect common pitfalls when developing ontologies. Enter the URI or the RDF code of the ontology. Once the ontology is analyzed, a results list of pitfalls appears that can be expanded to display information regarding each pitfall.
  • Protégé Protégé is a free, open-source platform with a suite of tools to construct domain models and knowledge-based applications with ontologies. Protégé Desktop is a feature-rich ontology editing environment with full support for the OWL 2 Web Ontology Language and direct in-memory connections to description logic reasoners like HermiT and Pellet. It supports creation and editing of one or more ontologies in a single workspace via a completely customizable user interface, and visualization tools allow for interactive navigation of ontology relationships. It is W3C standards compliant, offers ontology refactoring support, and is cross-compatible with WebProtégé. Protégé provides an environment to create, upload, modify, and share ontologies for collaborative viewing and editing. Protégé was developed by the Stanford Center for Biomedical Informatics Research at the Stanford University School of Medicine. Download the desktop version or use the Web version from this site.
  • VOWL: Visual Notation for OWL Ontologies This page provides access to three tools for visualizing ontologies: WebVOWL; QueryVOWL; and the Protégé plug-in, ProtégéVOWL. A link to the VOWL (Visual Notation for OWL Ontologies) specification and a Language Reference for QueryVOWL (Visual Query Language) for Linked Data is also provided.

Query Tools, Search Engines & Browser Add-ons

  • Linked Data Fragments Use this tool to execute queries against live Linked Data on the Web in your browser. The tool supports federated querying.
  • OpenLink Data Explorer Extension OpenLink Software. This web browser extension provides options for viewing the data sources associated with Web pages, exposing the raw data and entity relationships that underlie the Web resources it processes. The extension enables hypertext and hyperdata traversal of Web data, and it provides filters for faceted searching and visualization options. The add-on is easy to install; it was originally developed for most browsers, but some browser updates have broken it, so try using it with Chrome.
  • OpenLink Structured Data Sniffer (OSDS) OpenLink Software. OpenLink Structured Data Sniffer is a browser extension for Google Chrome, Microsoft Edge, Mozilla Firefox, Opera, and Vivaldi that reveals structured metadata embedded in HTML pages in notations including POSH (Plain Old Semantic HTML), Microdata, JSON-LD, RDF-Turtle, and RDFa. Buttons assist in navigating the Web, and it provides the ability to save extracted metadata or new annotations to the cloud or local storage.
  • Metaproxy Index Data. Metaproxy is a proxy Z39.50/SRW/SRU front end server designed for integrating multiple back end databases into a single searchable resource. It also works in conjunction with Index Data’s library of gateways to access non-standard database servers. Index Data works with libraries, consortia, publishers, aggregators, technology vendors, and developers.
  • Ontobee He Group. University of Michigan. Ontobee is a Linked Data server designed to facilitate ontology sharing, visualization, query, integration, and analysis. It dereferences term URIs to HTML web pages for user-friendly browsing and navigation and to RDF source code for Semantic Web applications.

Triple Store Tools

  • Blazegraph Blazegraph is a scalable, high-performance graph database with support for the Blueprints and RDF/SPARQL APIs. It supports up to 50 billion edges on a single machine. Blazegraph is implemented in Java. Wikimedia uses it to power the Wikidata Query Service.
  • Gruff Gruff is a free, downloadable graphical triple-store browser with a variety of tools for laying out cyclical graphs, displaying tables of properties, managing queries, and building queries as visual diagrams. Use Gruff to display visual graphs of subsets of a store's resources and their links, and to build a visual graph that displays a variety of the relationships in a triple-store. Gruff can also display tables of all properties of selected resources or generate tables with SPARQL queries, and resources in the tables can be added to the visual graph.

Validators

  • Freeformatter JSON Validator This tool validates a JSON string against RFC 4627 (the application/json media type for JavaScript Object Notation) and against the JavaScript language specification. Configure the validator to be lenient or strict.
  • Link Checker W3C. (2019). Use this validator to check for issues with links, anchors, and referenced objects in Web pages, CSS style sheets, or whole Web sites. Best results are achieved when the documents checked use valid (X)HTML markup and CSS.
  • RDF Validation Service Use this tool to parse RDF/XML documents. A 3-tuple (triple) representation of the corresponding data model as well as an optional graphical visualization of the data model will be displayed.
  • Structured Data Linter The Structured Data Linter was initiated by Stéphane Corlosquet and Gregg Kellogg. It is a tool to verify structured data present in HTML pages. The Linter provides snippet visualizations for schema.org and performs limited vocabulary validations for schema.org, Dublin Core Metadata Terms, Friend of a Friend (FOAF), GoodRelations, Facebook's Open Graph Protocol, Semantically-Interlinked Online Communities (SIOC), Simple Knowledge Organization System (SKOS), and Data-Vocabulary.org.
  • Toolz Online XML Validator Insert a fragment of an XML document into this tool to validate it.
  • Yandex Yandex provides a structured data validator for checking semantic markup. Check all the most common formats (Microdata, schema.org, microformats, OpenGraph, and RDF) by cutting and pasting the source code into the validator.
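The online validators above apply the same kind of strict parsing that a standard library JSON parser performs. As a minimal sketch (the `validate_json` helper is ours, not part of any tool listed here), Python's stdlib `json` module reports the line, column, and cause of a syntax error much like a strict RFC 4627-style validator:

```python
import json

def validate_json(text: str) -> tuple[bool, str]:
    """Return (is_valid, message) for a JSON string, using strict parsing."""
    try:
        json.loads(text)
        return True, "valid"
    except json.JSONDecodeError as e:
        # JSONDecodeError carries the position and cause of the failure.
        return False, f"line {e.lineno}, column {e.colno}: {e.msg}"

# Well-formed JSON passes; single quotes and trailing commas are rejected.
ok, _ = validate_json('{"title": "CKAN", "open_source": true}')
bad, msg = validate_json("{'title': 'CKAN'}")  # single quotes are invalid JSON
```

A lenient mode of the kind some validators offer (allowing comments or trailing commas) would need a different parser; `json.loads` rejects both.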

Visualization Tools

  • D3 Data-Driven Documents is a JavaScript library for manipulating documents based on data using HTML, SVG, and CSS. Using D3, data can be displayed in a vast array of visualization formats including, but not limited to, Box Plots, Bubble Charts, Bullet Charts, Calendar Views, Chord Diagrams, Dendrograms, Force-Directed Graphs, Circle Packings, Population Pyramids, Streamgraphs, Sunbursts, Node-link Trees, Treemaps, Voronoi Diagrams, Collision Detections, Hierarchical Edge Bundlings, Word Clouds, and more.
  • Visual Data Web The Visual Data Web provides links to visualization tools compatible with RDF and Linked Data on the Semantic Web, especially for average Web users with little to no knowledge about the underlying technologies. The site provides information regarding developments, related publications, and current activities to generate new ideas, methods, and tools to make the Data Web more accessible and visible.

XPath Tools

  • eagle-i The eagle-i software and ontology consist of six web applications: eagle-i Central Search and iPS Cell Search, for resource discovery and exploration; Institutional Search, for a single-repository search UI; Ontology Browser, for viewing the eagle-i ontology without any additional applications; SWEET (Semantic Web Entry & Editing Tool), for manually entering and managing data in an eagle-i repository; RDF repository, for storing resource and provenance metadata as RDF triples; and SPARQLer, a SPARQL query entry point and workbench to query an eagle-i repository. These applications are served by the ETL (extract, transform, and load) toolkit, for batch entry of information to an eagle-i repository in an ontology-compliant manner, and the Data Management toolkit, for bulk data maintenance and migration. The open source software development platform offers integrated tools for JIRA bug tracking, Confluence Wiki, Bamboo continuous builds, Nexus download repository, project mailing lists, repository monitoring, and more.
  • Freeformatter XPath Tester/Evaluator Use this tool to test XPath expressions/queries against an XML file. It supports most of the XPath functions (string(), number(), name(), string-length(), etc.) and is not limited to working against nodes.
  • Toolz XPath Tester/Evaluator Use this tool to run an XPath statement against an XML fragment.
  • W3C XPath evaluation online Use this W3C tool to check an XPath expression against XML.
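For quick offline experiments along the lines of the testers above, Python's stdlib `xml.etree.ElementTree` evaluates a limited XPath subset (paths, wildcards, and attribute predicates). The sample document below is hypothetical:

```python
import xml.etree.ElementTree as ET

# A small XML fragment like those pasted into the online testers above.
doc = """
<catalog>
  <book id="b1"><title>Learning SPARQL</title></book>
  <book id="b2"><title>Semantic Web for the Working Ontologist</title></book>
</catalog>
"""

root = ET.fromstring(doc)

# A relative path selects all matching elements in document order.
titles = [t.text for t in root.findall("./book/title")]

# Attribute predicates ([@attr='value']) are also supported.
second = root.find(".//book[@id='b2']/title")
```

Full XPath 1.0 (functions such as string-length(), axes, and so on) requires a third-party library such as lxml; ElementTree covers only the path-and-predicate subset shown here.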

Miscellaneous

  • Keyword Planner The Google AdWords Keyword Planner is not a semantic web tool. While geared towards advertising, it can be a useful tool for discovering similar keywords for a topic. It is a free tool, but you will have to create an account.
  • prefix.cc Enter a namespace prefix in this tool to find the full namespace for the prefix. The service also provides a reverse lookup option which finds a prefix for a given namespace URI.
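prefix.cc also serves machine-readable results. As a sketch of building request URLs for the service, assuming its URL conventions (`/<prefix>.file.<format>` for forward lookups and a `/reverse` endpoint for reverse lookups; both patterns are assumptions here, so check the service's documentation):

```python
from urllib.parse import quote, urlencode

BASE = "https://prefix.cc"

def lookup_url(prefix: str, fmt: str = "json") -> str:
    # Forward lookup: assumed pattern /<prefix>.file.<format>
    return f"{BASE}/{quote(prefix)}.file.{fmt}"

def reverse_url(namespace: str) -> str:
    # Reverse lookup: assumed /reverse endpoint taking a namespace URI
    return f"{BASE}/reverse?{urlencode({'uri': namespace, 'format': 'json'})}"

print(lookup_url("foaf"))
print(reverse_url("http://xmlns.com/foaf/0.1/"))
```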

Instructional Resources for Semantic Tools

  • WebLearn This blog provides examples of using OpenRefine to clean MARC data. Stephen shares his experience working with MARC data while developing the Sir Louie Project, a project on behalf of the British Library to improve the searching of library catalogues and the display of availability information alongside a reading list.

SPARQL is the query language for RDF. It is a set of specifications, published as W3C Recommendations, that provide languages and protocols to query and manipulate RDF graph content on the Web or in an RDF triple store.

SPARQL Documentation

  • SPARQL 1.1 Entailment Regimes Glimm, Birte, and Ogbuji, Chimezie, editors. (2013, March 21). This document defines entailment regimes and specifies how they can be used to redefine the evaluation of basic graph patterns from a SPARQL query, making use of SPARQL's extension point for basic graph pattern matching. Entailment regimes specify conditions that limit the number of entailments that contribute solutions for a basic graph pattern.
  • SPARQL 1.1 Federated Query Seaborne, Andy, Polleres, Axel, Feigenbaum, Lee, and Williams, Gregory Todd. (2013, March 21). The SPARQL Federated Query extension is a specification which defines the syntax and semantics for using the SERVICE keyword to execute queries that merge data distributed over different SPARQL endpoints. It provides for the ability to direct a portion of a query to a particular SPARQL endpoint. Results are returned to the federated query processor and are combined with results from the rest of the query.
  • SPARQL 1.1 Graph Store HTTP Protocol Ogbuji, Chimezie, editor. (2013, March 21). This document describes the use of HTTP for managing a collection of RDF graphs as an alternative to the SPARQL 1.1 Update protocol interface. For some clients or servers, HTTP may be easier to implement or work with, and this specification serves as a non-normative suggestion for HTTP operations on RDF graphs which are managed outside of a SPARQL 1.1 graph store.
  • SPARQL 1.1 Overview W3C SPARQL Working Group. (2013, March 21). This document provides an introduction to a set of W3C specifications for querying and manipulating RDF graph content on the Web or in an RDF store. It gives a brief description of the eleven specifications that comprise SPARQL.
  • SPARQL 1.1 Protocol Feigenbaum, Lee, Williams, Gregory Todd, Clark, Kendall Grant, and Torres, Elias. (2013, March 21). The SPARQL 1.1 Protocol describes a means for conveying SPARQL queries and updates to a SPARQL processing service and returning the results via HTTP to the entity that requested them. It has been designed for compatibility with the SPARQL 1.1 Query Language and with the SPARQL 1.1 Update Language for RDF. This document is intended primarily for software developers implementing SPARQL query and update services and clients.
  • SPARQL 1.1 Query Results CSV and TSV Formats Seaborne, Andy. (2013, March 21). This document describes the use of comma-separated values (CSV) and tab-separated values (TSV) for expressing SPARQL query results from SELECT queries. CSV and TSV are formats for the transmission of tabular data, particularly spreadsheets.
  • SPARQL 1.1 Query Results JSON Format Clark, Kendall Grant, Feigenbaum, Lee, and Torres, Elias. (2013, March 21). SPARQL is a set of standards which defines several Query Result Forms used to query and update RDF data, along with ways to access such data over the web. This document defines the representation of SELECT and ASK query results using JSON.
  • SPARQL Query Results XML Format (Second Edition) Beckett, Dave, and Broekstra, Jeen. (2013, March 21). SPARQL is a set of standards which defines several Query Result Forms used to query and update RDF data, along with ways to access such data over the web. This document defines the SPARQL Results Document that encodes variable binding query results from SELECT queries and boolean query results from ASK queries in XML.
  • SPARQL 1.1 Query Language Harris, Steve, Seaborne, Andy. (2013, March 21). This document defines the syntax and semantics of the SPARQL query language for RDF, a directed, labeled graph data format for representing information in the Web. SPARQL is used to express queries across data sources, whether the data is stored natively as RDF or viewed as RDF via middleware. SPARQL supports querying required and optional graph patterns along with their conjunctions and disjunctions, aggregation, subqueries, negation, creating values by expressions, extensible value testing, and constraining queries by source RDF graph. Results of SPARQL queries can be result sets or RDF graphs.
  • SPARQL 1.1 Service Description Williams, Gregory Todd. (2013, March 21). A SPARQL service description lists the features of a SPARQL service made available via the SPARQL Protocol for RDF. This document describes how to discover a service description from a specific SPARQL service and an RDF schema for encoding such descriptions in RDF.
  • SPARQL 1.1 Update Gearon, Paula, Passant, Alexandre, and Polleres, Axel. (2013, March 21). SPARQL 1.1 Update is a language used to update RDF graphs using a syntax derived from the SPARQL Query Language for RDF. Operations are provided to update, create, and remove RDF graphs in a Graph Store.
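Several of the specifications above come together in a single HTTP request: the Query Language defines the query text, the Protocol defines how it is conveyed, and the Query Results formats are selected via content negotiation. A minimal sketch that only constructs (and does not send) such a request; the endpoint URL is hypothetical:

```python
from urllib.parse import urlencode
from urllib.request import Request

# A SELECT query per the SPARQL 1.1 Query Language.
query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?title WHERE { ?book dcterms:title ?title } LIMIT 10
"""

# Per the SPARQL 1.1 Protocol, a query can be sent as an HTTP GET with a
# `query` parameter; the Accept header picks a results serialization, here
# the SPARQL 1.1 Query Results JSON Format.
endpoint = "https://example.org/sparql"  # hypothetical endpoint
url = f"{endpoint}?{urlencode({'query': query})}"
req = Request(url, headers={"Accept": "application/sparql-results+json"})
```

Swapping the Accept header for `text/csv` or `text/tab-separated-values` would request the CSV/TSV results formats instead.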

GeoSpatial SPARQL

In addition to the W3C SPARQL documents, there is documentation for a geospatial SPARQL query language.

  • OGC GeoSPARQL - A Geographic Query Language for RDF Data Perry, Matthew, and Herring, John, editors. (2012, September 10). Open Geospatial Consortium (OGC). This OGC standard defines a vocabulary for representing geospatial data in RDF. It also defines an extension to the SPARQL query language for processing geospatial data. The GeoSPARQL query language is designed to accommodate systems based on qualitative spatial reasoning and systems based on quantitative spatial computations.
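As a hedged illustration of the kind of query the standard enables (the data and polygon below are hypothetical), GeoSPARQL adds a geometry vocabulary, a WKT literal datatype, and topological filter functions such as geof:sfWithin:

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>

# Find features whose geometries lie within a query polygon (WKT literal).
SELECT ?feature WHERE {
  ?feature geo:hasGeometry ?geom .
  ?geom geo:asWKT ?wkt .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((-77.1 38.8, -77.0 38.8, -77.0 38.9, -77.1 38.9, -77.1 38.8))"^^geo:wktLiteral))
}
```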

SPARQL Endpoints

This box provides links to some SPARQL endpoints that are useful for researchers and are good examples of datasets for practicing SPARQL queries. The Europeana dataset is used in the SPARQL for humanists tutorial listed under SPARQL Instructional Resources below.

  • Europeana SPARQL API Use this API to explore connections between Europeana data and outside data sources, like VIAF, Iconclass, Getty Vocabularies (AAT), Geonames, Wikidata, and DBPedia.

SPARQL Tools

This box contains SPARQL tools.

  • Apache Jena Fuseki2 Apache Jena Fuseki is a SPARQL server. It can run as an operating system service, as a Java web application (WAR file), or as a standalone server. It provides the SPARQL 1.1 protocols for query and update as well as the SPARQL Graph Store protocol. Fuseki can be configured with TDB to provide a transactional persistent storage layer, and it incorporates Jena text query and Jena spatial query.
  • Pubby Bizer, Christian, and Cyganiak, Richard. Freie Universität Berlin. Pubby adds Linked Data interfaces to SPARQL endpoints. It can turn a SPARQL endpoint into a Linked Data server, and is implemented as a Java web application. Features include providing dereferenceable URIs by rewriting URIs found in the SPARQL-exposed dataset into the Pubby server's namespace, providing an HTML interface showing the data available about each resource, handling 303 redirects and content negotiation, and providing for the addition of metadata. It is compatible with Tomcat and Jetty servlet containers.

SPARQL Instructional Resources

  • SPARQL Sample Queries Coombs, Karen. This page on the GitHub blog Library Web Chic provides useful examples of SPARQL queries and is an excellent place to browse when learning how to query with SPARQL. Examples start with simple queries for finding subjects, predicates, and objects and build up to more complex federated and filtered queries across datasets. The page serves as a companion to Karen Coombs' Querying Linked Data webinar.
  • SPARQL for humanists Lincoln, Matthew. (2014, July 10). From The Programming Historian. This blog entry introduces SPARQL using the Europeana Data Model (EDM) and provides a good introduction to the language. For a more advanced lesson, see Matthew Lincoln, Using SPARQL to access Linked Open Data, from The Programming Historian.
  • Using SPARQL to access Linked Open Data Lincoln, Matthew. (2015, November 24). From The Programming Historian. This blog entry provides a lesson explaining why cultural institutions are moving to graph databases. The entry also gives a detailed lesson in using SPARQL to access data in cultural institution databases.

Vocabularies, Ontologies and Frameworks

Controlled vocabularies, ontologies, schemas, thesauri, and syntaxes are building blocks used by the Resource Description Framework (RDF) to structure data semantically, identify resources, and show the relationships between resources in Linked Data. Libraries and cultural institutions belong to one of the many knowledge organization domains making use of controlled authorities. These pages focus especially on the vocabularies and computer languages used in the data landscape of libraries and cultural heritage institutions.

Seeing Standards: Visualization of the Metadata Universe

About Seeing Standards

Becker, Devin, and Riley, Jenn L. (2010). Seeing Standards: A Visualization of the Metadata Universe. Click on the chart to access a PDF version and a Glossary of Metadata Standards.

About Vocabularies

  • About Taxonomies & Controlled Vocabularies American Society for Indexing, Taxonomies & Controlled Vocabularies Special Interest Group. This page describes the differences between controlled vocabularies, taxonomies, thesauri, and ontologies.

Ontologies & Frameworks

International Image Interoperability Framework (IIIF)

IIIF is a framework for image delivery developed by a community of leading research libraries and image repositories. Its goals are to provide uniform and rich access to image-based resources hosted around the world; to define a set of common application programming interfaces supporting interoperability between image repositories; and to develop, cultivate, and document shared technologies, such as image servers and web clients, for viewing, comparing, manipulating, and annotating images.

The two core APIs for the Framework are:

  • IIIF Image API 3.0

IIIF Consortium. (2021). Appleby, Michael, Crane, Tom, Sanderson, Robert, Stroop, Jon, and Warner, Simeon. This document describes an image delivery API specification for a web service that returns an image in response to a standard HTTP or HTTPS request. The URI can specify the region, size, rotation, quality characteristics, and format of the requested image, and can also be used to request basic technical information about the image to support client applications.
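The request URI described above follows the Image API template `{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}`. A small sketch of assembling such URIs; the base URI and identifier below are hypothetical:

```python
def iiif_image_url(base: str, identifier: str, region: str = "full",
                   size: str = "max", rotation: str = "0",
                   quality: str = "default", fmt: str = "jpg") -> str:
    """Assemble an IIIF Image API request URI from its path segments."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

base = "https://example.org/iiif"  # hypothetical image service

# The whole image at maximum size, unrotated, in the default quality:
full = iiif_image_url(base, "page-001")

# A pixel region (x,y,w,h), rotated 90 degrees, rendered in gray:
detail = iiif_image_url(base, "page-001", region="125,15,120,140",
                        rotation="90", quality="gray")
```

Each segment maps to an Image API parameter, so a client can request exactly the crop, scale, and rendering it needs without downloading the full image.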

  • IIIF Presentation API 3.0

IIIF Consortium. (2021). Appleby, Michael, Crane, Tom, Sanderson, Robert, Stroop, Jon, and Warner, Simeon. The IIIF Presentation API provides the information necessary to present a rich, online viewing environment for compound digital objects to human users. It enables the display of digitized images, video, audio, and other content types associated with a particular physical or born-digital object; allows navigation between multiple views or time extents of the object, either sequentially or hierarchically; displays descriptive information about the object, view, or navigation structure; and provides a shared environment in which publishers and users can annotate the object and its content with additional information.

  • Presentation Cookbook of IIIF Recipes

The Cookbook illustrates the resource types and properties of the Presentation specification and their rendering by viewers and other software clients. Examples are provided to encourage publishers to adopt common patterns in modeling classes of complex objects, to enable client software developers to support these patterns for a consistent user experience, and to demonstrate the applicability of IIIF to a broad range of use cases.

Additional APIs for the Framework are:

  • IIIF Authentication API 1.0

IIIF Consortium. (2021). Appleby, Michael, Crane, Tom, Sanderson, Robert, Stroop, Jon, and Warner, Simeon. The Authentication specification describes a set of workflows for guiding the user through an existing access control system. It provides a link to a user interface for logging in, along with services that provide credentials, modeled after elements of the OAuth2 workflow, acting as a bridge to the access control system in use on the server without the client requiring knowledge of that system.

  • IIIF Content Search API 1.0

IIIF Consortium. (2021). Appleby, Michael, Crane, Tom, Sanderson, Robert, Stroop, Jon, and Warner, Simeon. The Content Search specification lays out the interoperability mechanism for performing searches among varied content types from different sources. The scope of the specification is searching annotation content within a single IIIF resource, such as a Manifest, Range, or Collection.

Linked Art

Linked Art is a data model which provides an application profile used to describe cultural heritage resources, with a focus on artworks and museum-oriented activities. Based on real-world data and use cases, it defines common patterns and terms used in its conceptual model, ontologies, and vocabulary. Linked Art follows existing standards and best practices, including CIDOC-CRM and the Getty Vocabularies, with JSON-LD 1.1 as the core serialization format.

Ontologies are formalized vocabularies of terms, often covering a specific domain. They specify the definitions of terms by describing their relationships with other terms in the ontology. OWL 2 is the Web Ontology Language designed to facilitate ontology development and sharing via the Web. It provides classes, properties, individuals, and data values that are stored as Semantic Web documents. As an RDF vocabulary, OWL can be used in combination with RDF schema.
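As a minimal, hypothetical sketch of what an OWL ontology fragment looks like when serialized as RDF (here in Turtle, using an invented ex: namespace), classes and properties are declared and related with OWL and RDF Schema terms:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/ontology#> .

ex:Book a owl:Class .
ex:Serial a owl:Class ;
    rdfs:subClassOf ex:Book .        # every serial is a kind of book

ex:Person a owl:Class .
ex:hasAuthor a owl:ObjectProperty ;  # relates books to people
    rdfs:domain ex:Book ;
    rdfs:range  ex:Person .
```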

VOWL : Visual Notation for OWL Ontologies

Negru, Stefan, Lohmann, Steffen, and Haag, Florian. (2014, April 7). Specification of Version 2.0. VOWL defines a visual language for the user-oriented representation of ontologies. The language provides graphical depictions for elements of OWL that are combined into a force-directed graph layout visualizing the ontology. It focuses on the visualization of classes, properties, and datatypes, sometimes called the TBox, while also including recommendations on how to depict individuals and data values, the ABox. Familiarity with OWL and other Semantic Web technologies is required to understand this specification.

  • OWL 2 Web Ontology Language Document Overview (Second Edition) This is the W3C introduction to OWL 2 and the various other OWL 2 documents. The document describes the syntaxes for OWL 2, the different kinds of semantics, the available sub-languages, and the relationship between OWL 1 and OWL 2. Read this document before reading other OWL 2 documents.
  • OWL 2 Web Ontology Language Structural Specification and Functional-Style Syntax (Second Edition) This document defines the OWL 2 language. The core part, the structural specification, describes the conceptual structure of OWL 2 ontologies and provides a normative abstract representation for all OWL 2 syntaxes. The document also defines the functional-style syntax, which follows the structural specification and allows OWL 2 ontologies to be written in a compact form. This syntax is used in the definitions of the semantics of OWL 2 ontologies, the mappings from and into the RDF/XML exchange syntax, and the different OWL 2 profiles.
  • OWL 2 Web Ontology Language Mapping to RDF Graphs (Second Edition) This document defines two mappings between the structural specification of OWL 2 and RDF graphs. The mappings can be used to transform an OWL 2 ontology into an RDF graph and an RDF graph into an OWL 2 ontology.
  • Time Ontology in OWL Cox, Simon, Little, Chris, Hobbs, Jerry R., and Pan, Feng. (2017, October 19). W3C. The Time Ontology in OWL (OWL-Time) can be used to describe temporal relationships. It focuses particularly on temporal ordering relationships. Elements of a date and time are put into separately addressable resources. OWL-Time supports temporal coordinates (scaled positions on a continuous temporal axis) and ordinal times (named positions or periods) and does not necessarily expect the use of the Gregorian calendar.
  • PRESSoo Le Boeuf, Patrick (2016, January). PRESSoo is an ontology designed to represent bibliographic information relating to serials and continuing resources. PRESSoo is an extension of the Functional Requirements for Bibliographic Records – Object Oriented model (FRBRoo). PRESSoo has been developed by representatives of the ISSN International Centre, the ISSN Review Group, and the Bibliothèque nationale de France (BnF).

The Resource Description Framework (RDF) is a framework for representing information in the Web of Data. It comprises a suite of standards and specifications whose documentation is listed below.

  • Cool URIs for the Semantic Web Sauermann, Leo and Cyganiak, Richard. (2008, Dec. 3). W3C. Uniform Resource Identifiers (URIs) are at the core of RDF, providing the link between RDF and the Web. This document presents guidelines for their effective use. It discusses two strategies, called 303 URIs and hash URIs. It gives pointers to several Web sites that use these solutions, and briefly discusses why several other proposals have problems.
  • RDF 1.1 Concepts and Abstract Syntax This W3C document defines an abstract syntax (a data model) for linking RDF-based languages and specifications. The syntax has a data structure for expressing descriptions of resources as RDF graphs made of sets of subject-predicate-object triples, where the elements may be IRIs, blank nodes, or datatyped literals. The document introduces key concepts and terminology, and discusses datatyping and the handling of fragment identifiers in IRIs within RDF graphs.
  • RDF 1.1 Primer The Primer introduces basic RDF concepts and shows concrete examples of the use of RDF. It is designed to provide the basic knowledge required to effectively use RDF.
  • RDF Schema 1.1 The RDF Schema provides a data-modeling vocabulary for RDF data and is an extension of the basic RDF vocabulary. The IRIs for the namespaces for the RDF Schema and the RDF Syntax are defined in this document. The RDF Schema provides mechanisms for describing groups of related resources and the relationships between these resources which can be used to describe other RDF resources in application-specific RDF vocabularies.
  • RDF 1.1 Semantics This is one of the documents that comprise the full specification of RDF 1.1. It describes semantics for the Resource Description Framework 1.1, the RDF Schema, and RDFS vocabularies.
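The graph-of-triples data model described above can be made concrete with a short sketch: an RDF graph as a Python set of triples, plus a minimal triple-pattern match (with None as a wildcard), the operation that underlies SPARQL basic graph patterns. All `ex:` names are invented for illustration.

```python
# An RDF graph as a set of (subject, predicate, object) triples, with a
# minimal triple-pattern match (None = wildcard). The ex: names are
# hypothetical; a production system would use an RDF library and SPARQL.
graph = {
    ("ex:mona_lisa", "ex:creator", "ex:da_vinci"),
    ("ex:mona_lisa", "ex:title",   '"Mona Lisa"'),
    ("ex:da_vinci",  "ex:name",    '"Leonardo da Vinci"'),
}

def match(graph, s=None, p=None, o=None):
    """Yield triples matching the pattern; None matches anything."""
    for t in graph:
        if ((s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)):
            yield t

# Everything asserted about ex:mona_lisa:
for t in sorted(match(graph, s="ex:mona_lisa")):
    print(t)
```

The same pattern-matching idea, generalized with variables and joins, is what SPARQL queries express over RDF datasets.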

RDF 1.1 Serializations

There are a number of RDF serialization formats for implementing RDF. The first format was RDF/XML. Subsequent serialization formats have been developed and may be better suited to particular environments.

  • JSON-LD 1.0 Sporny, Manu, Longley, Dave, Kellogg, Gregg, Lanthaler, Markus, and Lindström, Niklas. (2014, Jan. 16). A JSON-based Serialization for Linked Data. Recommendation. W3C. This specification defines JSON-LD, a JSON-based format for serializing Linked Data. With RDF tools, JSON-LD can be used as an RDF syntax.
  • RDF 1.1 Turtle Terse RDF Triple Language. Beckett, David, Berners-Lee, Tim, Prud'hommeaux, Eric, and Carothers, Gavin. (2014, Feb. 25). Recommendation. W3C. This document defines Turtle, the Terse RDF Triple Language, a concrete syntax for RDF that allows an RDF graph to be written in a compact, natural text form with abbreviations for common usage patterns and datatypes. Turtle provides levels of compatibility with the N-Triples format and with SPARQL.
  • RDF 1.1 XML Syntax This W3C document defines the XML syntax for RDF graphs. W3C. (2014, Feb.25). Recommendation. Gandon, Fabien and Schreiber, Guus. eds.
  • RDFa Core 1.1 Adida, Ben, Birbeck, Mark, McCarron, Shane, and Herman, Ivan. (2015, Mar. 17). Syntax and processing rules for embedding RDF through attributes. 3rd ed. Recommendation. W3C. RDFa Core is a specification for attributes to express structured data in any markup language. The rules for interpreting the data are generic, so there is no need for different rules for different formats. Data already available in the markup language (e.g., HTML) can often be reused by the RDFa markup.
  • RDFa 1.1 Primer Herman, Ivan, Adida, Ben, Sporny, Manu, and Birbeck, Mark. (2015, Mar. 17). Rich Structured Data Markup for Web Documents. W3C. RDFa (Resource Description Framework in Attributes) is a technique to add structured data to HTML pages directly. This Primer shows how to express data using RDFa in HTML, and in particular how to mark up existing human-readable Web page content to express machine-readable data.
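Because JSON-LD is ordinary JSON, it can be produced and consumed with nothing more than a JSON library. The sketch below hand-writes a minimal JSON-LD document using Python's standard json module; the FOAF IRIs are real vocabulary terms, but the person described is invented, and a full pipeline would use a JSON-LD processor for expansion and compaction.

```python
import json

# A minimal JSON-LD document, hand-written for illustration. The @context
# maps short keys to IRIs; the example person and IRIs under example.org
# are hypothetical.
doc = {
    "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage",
                     "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice/",
}

serialized = json.dumps(doc)        # serialize like any other JSON
roundtrip = json.loads(serialized)  # and parse it back unchanged
print(roundtrip["@id"])
```

The @context is what turns plain JSON keys into globally unambiguous IRIs, which is how JSON-LD documents map onto RDF triples.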

SKOS (Simple Knowledge Organization System)

SKOS is a W3C data model defined as an OWL Full ontology for use with knowledge organization systems including thesauri, classification schemes, subject heading systems, and taxonomies. Many Semantic Web vocabularies incorporate the SKOS model. The Library of Congress Subject Headings and the Getty Vocabularies are examples of vocabularies published as SKOS vocabularies.

  • SKOS Simple Knowledge Organization System eXtension for Labels (SKOS-XL) Namespace Document - HTML Variant SKOS-XL defines an extension for SKOS which provides additional support for describing and linking lexical entities. This document provides a brief description of the SKOS-XL vocabulary.
  • SKOS Simple Knowledge Organization System Namespace Document - HTML Variant This W3C document provides an HTML non-normative table of the SKOS vocabulary.
  • SKOS Simple Knowledge Organization System Primer SKOS provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other similar types of controlled vocabulary. This document serves as a user guide for those who would like to represent their concept scheme using SKOS.
  • SKOS Simple Knowledge Organization System Reference This document defines the SKOS namespace and vocabulary. SKOS is a data-sharing standard which aims to provide a bridge between different communities of practice within the library and information sciences involved in the design and application of knowledge organization systems widely recognized and applied in both modern and traditional information systems.
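The hierarchical structure SKOS expresses with skos:broader links can be sketched in a few lines. Below, a toy concept scheme is modeled as a Python dict from each concept to its broader concept; the concept names are invented, and real SKOS data would express these links as RDF triples.

```python
# A toy SKOS concept scheme: each entry maps a concept to its
# skos:broader concept. All ex: concept names are hypothetical.
broader = {
    "ex:Watercolor": "ex:Painting",
    "ex:Painting":   "ex:VisualArts",
    "ex:Sculpture":  "ex:VisualArts",
}

def ancestors(concept, broader):
    """Walk skos:broader links up to the top of the hierarchy."""
    chain = []
    while concept in broader:
        concept = broader[concept]
        chain.append(concept)
    return chain

print(ancestors("ex:Watercolor", broader))
# ['ex:Painting', 'ex:VisualArts']
```

Walking broader/narrower links like this is how applications built on SKOS vocabularies support query expansion and hierarchical browsing.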

Ontology Development

  • Ontology Development 101: A Guide to Creating Your First Ontology Noy, Natalya F. and McGuinness, Deborah L. This guide discusses the reasons for developing an ontology and the methodology for creating an ontology based on declarative knowledge representation systems.

Linked Open Vocabularies (LOV)


  • EMBL-EBI Ontology Lookup Service EMBL-EBI. (2022). Administered by the European Bioinformatics Institute, the Ontology Lookup Service (OLS) serves as a repository for the latest versions of biomedical ontologies. The site also provides access to several services including OxO, a cross-ontology term mapping tool; Zooma, which assists in mapping data to ontologies in OLS; and Webulous, a tool for building ontologies from spreadsheets. OLS includes over 270 structured vocabularies and ontologies.
  • Library of Congress Linked Data Service: Authorities and Vocabularies This page provides access to commonly used ontologies, controlled vocabularies, and other lists for bibliographic description including Genre/Form headings, Subject Headings for Children, Thesaurus of Graphic Materials, Preservation Events, Cryptographic Hash Functions, schemes, and codelists, etc. A search function is provided. Clicking on Technical Center in the menu on the left will provide information on how to download datasets, searching and querying, and serialization formats.
  • Library of Congress Standard Identifiers Scheme The Standard Identifiers Scheme from the Library of Congress lists standard number or code systems and assigns a URI to each database or publication that defines or contains the identifiers in order to enable these standard numbers or codes in resource descriptions to be indicated by a URI. This is an extensive list which includes for example: Digital Object Identifier, EIDR: Entertainment Identifier Registry; International Article Number; International Standard Book Number; Library of Congress Control Number; Linking International Standard Serial Number; Locally defined identifier; Publisher-assigned music number; Open Researcher and Contributor Identifier; Standard Technical Report Number; U.S. National Gazetteer Feature Name Identifier; Universal Product Code; Virtual International Authority File number; and more.
  • Linked Open Vocabularies (LOV) Use this site to find a list of vetted linked open vocabularies (RDFS or OWL ontologies) used in the Linked Open Data Cloud, which conform to quality requirements including URI stability and availability on the Web, use of standard formats and publication best practices, quality metadata and documentation, an identifiable and trusted publication body, and a proper versioning policy. Vocabularies are individually described by metadata and classified by the following vocabulary spaces: General and Meta; Library; City; Market; Space-Time; Media; Science; and Web. They are interlinked using the dedicated vocabulary VOAF. Search the LOV dataset at the vocabulary or element level. LOV Stats provide metric information regarding the vocabularies such as the number of vocabulary element occurrences in the LOD, the number of vocabularies in LOV that refer to a particular element, and more.
  • Open Metadata Registry The Registry provides a means to identify, declare, and publish through registration metadata schemas (element/property sets), schemes (controlled vocabularies) and Application Profiles (APs). It supports the machine mapping of relationships among terms and concepts in those schemes (semantic mappings) and schemas (crosswalks). The Registry supports metadata discovery, reuse, standardization, and interoperability locally and globally.
  • RDA Registry The RDA Registry defines vocabularies that represent the Resource Description and Access (RDA) element set, relationship designators, and controlled terminologies as RDA element sets and RDA value vocabularies in Resource Description Framework (RDF). The published vocabularies are currently available in several sets which reflect the underlying FRBR conceptual model.
  • TaxoBank Terminology Registry TaxoBank contains information about controlled vocabularies of all types and complexities. The information collected about each vocabulary follows a study conducted by the Joint Information Systems Committee (JISC) of the Higher and Further Education Funding Councils. The site offers additional resources including information on Thesauri and Vocabulary Control - Principles and Practice, and a Glossary of Terms Relating to Thesauri and Other Forms of Structured Vocabulary.
  • Virtual International Authority File (VIAF) VIAF is a utility that matches and links authority files of national libraries. Data are derived from the personal name authority and related bibliographic data of the participating libraries. VIAF is implemented and hosted by OCLC.

The Getty Vocabularies

  • Art & Architecture Thesaurus® Online The Getty Research Institute. The scope of this vocabulary includes terminology needed to catalog and retrieve information about the visual arts and architecture.
  • Cultural Objects Name Authority® Online and Iconography Authority (IA) The Getty Research Institute. The Cultural Objects Name Authority ® (CONA) compiles titles, attributions, depicted subjects, and other metadata about works of art, architecture, and other cultural heritage, both extant and historical, physical and conceptual and can be used to record works depicted in visual surrogates or other works. Metadata may be gathered and linked from photo archive collections, visual resource collections, special collections, archives, libraries, museums, scholarly research, and other sources. The Getty Iconography Authority (IA) includes proper names and other information for named events, themes and narratives from religion/mythology, legendary and fictional characters, themes from literature, works of literature and performing arts, and legendary and fictional places.
  • The Getty Vocabularies as Linked Open Data The Getty Art & Architecture Thesaurus (AAT) ®, Thesaurus of Geographic Names (TGN) ®, and the Union List of Artist Names (ULAN) ® are available as Linked Open Data. This link provides access to the vocabularies and information regarding how to use them. Examples of URIs for each vocabulary are provided.
  • Getty Vocabularies: Linked Open Data Semantic Representation Vladimir Alexiev, Joan Cobb, Gregg Garcia, Patricia Harpring. (2017, June 13). This document explains the representation of the Getty Vocabularies in semantic format, using RDF and appropriate ontologies. It covers the Art and Architecture Thesaurus (AAT)®, the Thesaurus of Geographic Names (TGN)® and the Union List of Artist Names (ULAN)®.
  • Getty Vocabularies OpenRefine Reconciliation The Getty Research Institute. This page offers information and a tutorial on how to reconcile data sets to the Getty Vocabularies using the browser add-on OpenRefine. Use data reconciliation to compare local data to values in the Getty Vocabularies in order to map to them.
  • Thesaurus of Geographic Names® Online The Getty Research Institute. The scope of this vocabulary spans a wide spectrum of geographic vocabulary in cataloging and scholarship of art and architectural history and archaeology.
  • Training Materials The Getty Research Institute. This page provides training materials for the Art & Architecture Thesaurus (AAT)®, the Getty Thesaurus of Geographic Names (TGN)®, the Union List of Artist Names (ULAN)®, the Cultural Objects Name Authority (CONA)®, the Getty Iconography Authority (IA)™, Categories for the Description of Works of Art (CDWA), Cataloging Cultural Objects (CCO), and standards in general. It also provides access to conference presentations.
  • Union List of Artist Names® Online The Getty Research Institute. The ULAN is a structured vocabulary containing names and other information about artists, patrons, firms, museums, and others related to the production and collection of art and architecture. Names in ULAN may include given names, pseudonyms, variant spellings, names in multiple languages, and names that have changed over time (e.g., married names).

A schema uses a formal language to describe how the data in a database or document is organized and constrained. Several schemas addressing varied domain areas are listed in this box. Scroll down to the Dublin Core box to access information regarding the Dublin Core schema and tools.

  • BIBFRAME (Bibliographic Framework) Initiative This is the homepage for BIBFRAME the Library of Congress' Bibliographic Framework Initiative. BIBFRAME is a replacement for MARC and serves as a general model for expressing and connecting bibliographic data to the Web of Data. Access links to general information, the vocabulary, BIBFRAME implementation register, tools, draft specifications for Profiles, Authorities, and Relationships, a BIBFRAME testbed, webcasts and presentations, and more.
  • BIBFRAME Model & Vocabulary 2.0 This page provides access to three available views of the BIBFRAME Vocabulary. The vocabulary comprises RDF properties, classes, and the relationships between and among them.
  • BIBFRAME Pilot (Phase One—Sept. 8, 2015 – March 31, 2016): Report and Assessment Acquisitions & Bibliographic Access Directorate, Library of Congress. (2016, June 16). This document describes Phase One of the Library of Congress' pilot to test the efficacy of BIBFRAME. It includes descriptions of the planning process, what was being tested, the results, and lessons learned that will assist the Library of Congress as it moves to Phase Two of assessing the BIBFRAME model and vocabulary.
  • Bibliographic Framework as a Web of Data: Linked Data Model and Supporting Services Miller, Eric, Ogbuji, Uche, Mueller, Victoria, and MacDougall, Kathy. (2012, Nov. 21). Library of Congress. This document provides an introduction and overview of the Library of Congress Bibliographic Framework Initiative.
  • bibliotek-o: a BIBFRAME Ontology Extension Bibliotek-o is an ontology extension which defines additions and modifications to BIBFRAME, intended as a supplement to the core BIBFRAME ontology. It provides a set of recommended fragments from external ontologies and an application profile based on its recommended models and patterns. Bibliotek-o ontology extension is a joint product of the Mellon Foundation-funded Linked Data for Libraries Labs and Linked Data for Production projects.
  • bib.schema.org This is a bibliographic extension for schema.org. The page lists the types, properties, and enumeration values for use in describing bibliographic material using schema.org.
  • DataCite Metadata Schema DataCite. (2019, August 16). The DataCite Metadata Schema provides a list of core metadata properties chosen for accurate and consistent identification of resources for citation and retrieval purposes. Recommended use instructions are provided.
  • Direct Mapping of Relational Data to RDF Arenas, Marcelo, Bertails, Alexandre, Prud'hommeaux, Eric, and Sequeda, Juan (editors). (2012, Sept. 27). This document defines a direct mapping from relational data to RDF with provisions for extension points for refinements within and outside of the document.
  • FAST Linked Data FAST (Faceted Application of Subject Terminology) is an enumerative, faceted subject heading schema derived from the Library of Congress Subject Headings (LCSH). The purpose of adapting the LCSH with a simplified syntax to create FAST is to retain the vocabulary of LCSH while making the schema easier to understand, control, apply, and use. The schema maintains upward compatibility with LCSH, and any valid set of LC subject headings can be converted to FAST headings. The site provides access to searchFAST, a user interface that simplifies the process of heading selection, and to a Web interface for FAST subject selection available at FAST.
  • JSON Schema Version 7 (Draft). (2019, March 31). This is a vocabulary that provides for the annotation and validation of JSON documents. It can be used to describe data formats, provide human- and machine-readable documentation, make any JSON format a hypermedia format, allow the use of URI templates with instance data, describe client data for use with links, and recognize collections and collection items.
  • Metadata Authority Description Schema (MADS) MADS is an XML schema for an authority element set used to provide metadata about agents (people, organizations), events, and terms (topics, geographics, genres, etc.). It serves to provide metadata about the authoritative entities used in MODS descriptions.
  • Metadata Object Description Schema (MODS) MODS is a bibliographic element set that may be used for a variety of purposes, and particularly for library applications. MODS is an XML schema intended to be able to carry selected data from existing MARC 21 records as well as to enable the creation of original resource description records. It includes a subset of MARC fields and uses language-based tags rather than numeric ones, in some cases regrouping elements from the MARC 21 bibliographic format. It is maintained by the Library of Congress.
  • Metadata Object Description Schema (MODS) - Conversions Access MODS mapping including MARC to MODS, MODS to MARC, RDA to MODS, Dublin Core (simple) to MODS, and MODS to Dublin Core (simple). Style sheets are also available on this page.
  • Music Encoding Initiative (MEI) MEI is an XML DTD for the representation and exchange of comprehensive music information. MEI is a schema that provides ways to encode data from all the separate domains: logical; visual; gestural (performance); and analytical, commonly associated with music. It accommodates bibliographic description required for archival uses. It also addresses relationships between elements, cooperative creation and editing of music markup, navigation within the music structure as well as to external multimedia entities, the inclusion of custom symbols, etc. MEI can record the scholarly textual apparatus frequently found in modern editions of music.
  • R2RML: RDB to RDF Mapping Language Das, Souripriya, Sundara, Seema, Cyganiak, Richard (editors). (2012, Sept. 27). This document describes R2RML, a language for expressing customized mappings from relational databases to RDF datasets. The mappings provide the ability to view existing relational data in the RDF data model, expressed in a structure and target vocabulary of the mapping author's choice.
  • R2RML: RDB to RDF Mapping Language Schema This document defines the R2RML: RDB to RDF Mapping Language schema which is used to specify a mapping of relational data to RDF.
  • Schema.org Schema.org is a vocabulary that can be used with many different encodings, including RDFa, Microdata and JSON-LD, to mark up web pages and e-mail messages. Sponsored by Google, Microsoft, Yahoo and Yandex, the vocabularies are developed by an open community process which includes an extension mechanism to enhance the core vocabulary for specific knowledge domains. Its primary function is to provide web page publishers a means to enhance HTML pages so they can be crawled by semantic search engines, linking the pages to the web of data.
  • Text Encoding Initiative (TEI) The Text Encoding Initiative (TEI) is a global consortium which develops and maintains a set of Guidelines which specify encoding methods for machine-readable texts. TEI Guidelines have been widely used by libraries, museums, publishers, and individual scholars to present texts chiefly in the humanities, social sciences and linguistics. The site provides information on resources, projects using TEI, a bibliography of TEI-related publications, and TEI related software including Roma, a web-based application to generate P5-compatible schemas and documentation, and OxGarage, a tool for converting to and from TEI. In addition, the site links to a page of tools for use with TEI resources.
  • Thema Thema is a multilingual subject category schema designed for the commercial global book trade industry to meet the needs of publishers, retailers, trade intermediaries, and libraries. Thema aims to reduce the duplication of effort required by the many distinct national subject schema, and to eliminate the need for scheme-to-scheme mapping that inevitably degrades the accuracy of classification, by providing a single scheme for broad international use. It can be used alongside existing national schema.
  • XML 1.0 Bray, Tim, Paoli, Jean, Sperberg-McQueen, C. M., Maler, Eve, and Yergeau, François, eds. (2013, Feb. 7). XML 1.0 is a version of the Extensible Markup Language used to store and transport data on the Web. It is both human and machine readable.
  • XML Path Language (XPath) 2.0 Berglund, Anders, Boag, Scott, Chamberlin, Don, Fernández, Mary F., Kay, Michael, Robie, Jonathan, Siméon, Jérôme, (eds.) (2015, Sept. 7). 2nd edition. XPath is an expression language that uses a path notation for navigating through the hierarchical structure of XML documents. XPath 2.0 is a superset of XPath 1.0. It supports a richer set of data types and takes advantage of the type information that becomes available when documents are validated using XML Schema.
  • XQuery 1.0: An XML Query Language Boag, Scott, Chamberlin, Don, Fernández, Mary F., Florescu, Daniela, Robie, Jonathan, Siméon, Jérôme, (eds.). (2015, Sept. 7). 2nd edition. This is a version of XQuery, a query language that uses the structure of XML to express queries across all kinds of data, whether physically stored in XML or viewed as XML via middleware.
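The idea behind the Direct Mapping and R2RML entries above can be sketched in a few lines: each relational row becomes a subject IRI and each column becomes a predicate. The table name, primary-key convention, and base IRI below are all invented for illustration; the W3C documents define the exact IRI-construction rules.

```python
# A sketch of the Direct Mapping idea: one row -> one subject, one
# column -> one predicate. The books table, its columns, and the
# example.org base IRI are hypothetical.
BASE = "http://example.org/"

def row_to_triples(table, pk_col, row):
    """Turn one relational row (a dict) into a list of RDF-like triples."""
    subject = f"<{BASE}{table}/{pk_col}={row[pk_col]}>"
    return [(subject, f"<{BASE}{table}#{col}>", repr(val))
            for col, val in row.items()]

row = {"id": 7, "title": "Dubliners", "year": 1914}
for t in row_to_triples("books", "id", row):
    print(t)
```

R2RML generalizes this default behavior by letting the mapping author choose the subject templates and target vocabulary instead of deriving them mechanically from table and column names.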

Dublin Core

  • Dublin Core Metadata Initiative This site provides specification of all Dublin Core vocabulary metadata terms maintained by the Dublin Core Metadata Initiative, including properties, vocabulary encoding schemes, syntax encoding schemes, and classes.
  • DCMI Application Profile Vocabulary Coyle, Karen, editor (2021, April 9). This vocabulary supports the specification of Dublin Core Tabular Application Profiles (DC TAP). It is used to create a table or spreadsheet that defines the elements of an application profile. The vocabulary is also provided as a comma separated value template for use in a tabular form.
  • DC Tabular Application Profiles (DC TAP) - Primer Coyle, Karen, editor. (2021, April 3). This primer describes DC TAP, a vocabulary and a format for creating table-based application profiles.
  • dctap DCMI. (2021). dctap is a module and command-line utility for reading and interpreting CSV files formatted according to the DC Tabular Application Profiles (DCTAP) model. This document explains the project, installation, sub-commands, model, configuration, and provides a glossary.
  • dctap-python DCMI. dctap requires Python 3.7 or higher. This GitHub page provides information and documentation on installing dctap-python.
  • dctap/TAPtemplate.csv Coyle, Karen. (2020, December 2). Access the TAP csv template from this GitHub page.
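Because a DC TAP profile is an ordinary CSV table, it can be read with nothing more than Python's standard csv module. The tiny profile below is hypothetical; its column names follow the DC TAP model (shapeID, propertyID, valueDataType, mandatory), and the dctap utility above performs a much fuller interpretation.

```python
import csv
import io

# A tiny, hypothetical DC TAP profile as CSV text. Column names follow
# the DC TAP model; the :book shape and its properties are invented.
tap_csv = """\
shapeID,propertyID,valueDataType,mandatory
:book,dct:title,xsd:string,true
:book,dct:creator,,false
"""

rows = list(csv.DictReader(io.StringIO(tap_csv)))
required = [r["propertyID"] for r in rows if r["mandatory"] == "true"]
print(required)  # ['dct:title']
```

Keeping the profile tabular is the point of DC TAP: metadata specialists can author it in a spreadsheet, and tools can process it with any CSV reader.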

Legal Schemas

  • Akoma Ntoso Akoma Ntoso is an initiative to develop a number of connected XML standards, languages and guidelines for parliamentary, legislative and judiciary documents, and specifically to define a common document format, a model for document interchange, data schema, metadata schema and ontology, and schema for citation and cross referencing.
  • Legislative Documents in XML at the United States House of Representatives U.S. House of Representatives. This page provides Document Type Definitions (DTD) for use in the creation of legislative documents using XML, links to DTDs, and background information regarding legislative XML. Access element descriptions and content models for bills, resolutions, Amendments, and roll call votes. This initiative was conducted under the direction of the Senate Committee on Rules and Administration and the House Committee on Administration, and with the involvement of the Secretary of the Senate and the Clerk of the House, the Congressional Research Service, the Library of Congress, and the Government Publishing Office.
  • Electronic Court Filing Version 4.01 Plus Errata 01 OASIS. Angione, Adam and Cabral, James, editors. (2014, July 14). This specification defines a technical architecture and a set of components, operations and message structures for an electronic court filing system, and sets forth rules governing its implementation. It was developed by the OASIS LegalXML Electronic Court Filing Technical Committee.

RELATED RESOURCES

  • Akoma Ntoso an open document standard for Parliaments Palmirani, Monica, and Vitali, Fabio. (2014). World e-Parliament Conference. This set of slides describes an open XML standard for legal documents used in Parliamentary processes and judgments.
  • BIBFLOW: A Roadmap for Library Linked Data Transition Smith, MacKenzie, Stahmer, Carl G., Li, Xiaoli, and Gonzalez, Gloria. (2017, March 14). University of California, Davis and Zepheira, Inc. This is the report of the BIBFLOW project which provides a roadmap for libraries to use to transition into Linked Data environment. Recommendations for a phased transition are provided, as well as an analysis of transition tools, workflow transitions, estimated training, and work effort requirements.
  • Library of Congress BIBFRAME Manual Library of Congress. (Revised 2020, May). This is the training manual for the BIBFRAME Editor and BIBFRAME Database.
  • Artists’ Books Thesaurus This controlled vocabulary is for artists’ books. The Thesaurus is administered by the Arts Libraries Society of North America (ARLIS/NA). The platform, currently in draft form, will offer an illustrated, user-friendly guide to exploring and finding vocabulary terms.
  • DCAT (Data Catalog Vocabulary) DCAT is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web. This document defines the schema and provides examples for its use.
  • Data Documentation Initiative (DDI) DDI Alliance. (2021). DDI is a free international standard for describing data produced by surveys and other observational methods in the social, behavioral, economic, and health sciences. It can be used to document and manage different stages in the research data lifecycle, such as conceptualization, collection, processing, distribution, discovery, and archiving.
  • DOAP (Description of a Project) DOAP is an XML/RDF vocabulary used to describe software projects, and in particular open source projects. This site hosts the DOAP wiki, and provides links to DOAP validators, generators, viewers, aggregators, and web sites using DOAP.
  • Expression of Core FRBR Concepts in RDF This vocabulary is an expression in RDF of the concepts and relations described in the IFLA report on the Functional Requirements for Bibliographic Records (FRBR). It includes RDF classes for the group 1, 2, and 3 entities described by the FRBR report and properties corresponding to the core relationships between those entities. Where possible, appropriate relationships with other vocabularies are included in order to place this vocabulary in the context of existing RDF work.
  • FOAF (Friend of a Friend) Vocabulary Specification This specification describes the FOAF language used for linking people and information. FOAF integrates three kinds of network: social networks of human collaboration, friendship and association; representational networks that describe a simplified view of a cartoon universe in factual terms, and information networks that use Web-based linking to share independently published descriptions of this inter-connected world.
  • Language of Bindings (LoB) Ligatus, University of the Arts London. Based on SKOS, LoB is a thesaurus which provides terms used to describe historical binding structures. LoB can be used as a lookup resource on the website or as a software service where terms can be retrieved through an application. It can also be used for learning about book structures and materials, the frequency of the occurrence of bookbinding components, or other aspects connected with the book trade.
  • Lexvo.org Lexvo defines global IDs (URIs) for language-related objects, and ensures that these identifiers are dereferenceable and highly interconnected as well as externally linked to a variety of resources on the Web. Data sources include the Ethnologue Language Codes database, Linguist List, Wikipedia, Wiktionary, WordNet 3.0, the ISO 639-3 specification, the ISO 639-5 specification, the ISO 15924 specification, the Unicode Common Locale Data Repository (CLDR), et al. The site provides mappings between ISO 639 standards and corresponding Lexvo.org language identifiers and downloads of Lexvo datasets. Search over 7,000 language identifiers with names in many languages, links to script URIs (Latin and Cyrillic scripts, Indian Devanagari, the Korean Hangul system, etc.), geographic region URIs, etc.
  • OLAC video game genre terms (olacvggt) Online Audiovisual Catalogers Network (OLAC). (2019). Guidelines for OLAC video game genre terms (olacvggt). This vocabulary provides a list of video game genre terms, each of which has a corresponding MARC authority record. Links to the MARC authority records are provided.
  • PeriodO Rabinowitz, Adam T., Shaw, Ryan, and Golden, Patrick. PeriodO is a period gazetteer which documents definitions of historical period names. Definitions include a period name, temporal bounds on the period, an implicit or explicit association with a geographical region, and must have been formally or informally published in a citable source. Period definitions are modeled as SKOS concepts. Temporal extent is modeled using the OWL-Time ontology.
  • Rights Statements The Rights Statements vocabulary provides rights statements in three categories: In Copyright, No Copyright, and Other. Statements are meant to be used by cultural heritage institutions to communicate the copyright and re-use status of digital objects to the public. They are not intended to be used by individuals to license their own creations. RightsStatements.org is a joint initiative of Europeana and the Digital Public Library of America (DPLA).
  • Texas Digital Library Descriptive Metadata Guidelines for Electronic Theses and Dissertations, Version 2.0 Potvin, Sarah, Thompson, Santi, Rivero, Monica, Long, Kara, Lyon, Colleen, Park, Kristi. These Guidelines, produced by the Texas Digital Library ETD Metadata Working Group, comprise two documents to guide and shape local metadata practices for describing electronic theses and dissertations: the Dictionary, which lays out the standard, and the Report, which provides detailed explanations of the rationale, process, findings, and recommendations.
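As a concrete illustration of how a vocabulary such as FOAF above is used, the sketch below holds a minimal FOAF description in Turtle as a Python string. The terms foaf:Person, foaf:name, foaf:mbox, and foaf:knows are real FOAF vocabulary terms; the example.org URIs and the person described are hypothetical.

```python
# A minimal, hypothetical FOAF description held as a Turtle string.
# The FOAF terms are real; the URIs and person are invented for illustration.
FOAF_DOC = """\
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.org/people/alice>
    a foaf:Person ;
    foaf:name "Alice Example" ;
    foaf:mbox <mailto:alice@example.org> ;
    foaf:knows <http://example.org/people/bob> .
"""

print(FOAF_DOC)
```

The foaf:knows property is the link that lets independently published descriptions form a social network across the Web.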

Vocabulary of Interlinked Datasets (VoID)

  • Describing Linked Datasets with the VoID Vocabulary Alexander, Keith, Cyganiak, Richard, Hausenblas, Michael, and Zhao, Jun. (2011, March 3). This document describes the VoID model and how to provide general metadata about a dataset or linkset (a set of RDF triples whose subjects and objects are described in different datasets).
  • Vocabulary of Interlinked Datasets (VoID) Digital Enterprise Research Institute, NUI Galway. (2011, March 6). This document describes the formal definition of RDF classes and properties for VoID, an RDF Schema vocabulary for expressing metadata about RDF datasets. It functions as a bridge between publishers and users of RDF data, with applications including data discovery, cataloging, and archiving of datasets.
  • WorldCat Linked Data Vocabulary OCLC's WorldCat Linked Data uses a subset of terms from Schema.org as its core vocabulary. Access the list of classes, attributes, and extensions with this link.
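To make the VoID entries above concrete, the sketch below assembles a small VoID dataset description as Turtle using plain string formatting. The terms void:Dataset, void:sparqlEndpoint, and void:triples are real VoID vocabulary terms; the dataset URI, title, endpoint, and triple count are hypothetical.

```python
# Minimal sketch of a VoID dataset description assembled as a Turtle string.
# The VoID and Dublin Core terms are real; the URIs and count are invented.
VOID_PREFIXES = """\
@prefix void: <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
"""

def void_dataset(uri, title, endpoint, triples):
    """Return a Turtle snippet describing one void:Dataset."""
    return (
        f"<{uri}> a void:Dataset ;\n"
        f'    dcterms:title "{title}" ;\n'
        f"    void:sparqlEndpoint <{endpoint}> ;\n"
        f"    void:triples {triples} .\n"
    )

doc = VOID_PREFIXES + void_dataset(
    "http://example.org/dataset",    # hypothetical dataset URI
    "Example Bibliographic Dataset",
    "http://example.org/sparql",     # hypothetical SPARQL endpoint
    1000000,
)
print(doc)
```

A description like this is what enables the data discovery and cataloging applications mentioned in the VoID formal definition above.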

For the Getty Vocabularies, please see the Registries, Portals, and Authorities page.

Wikibase and Wikidata

Wikibase is the platform on which Wikidata, a Wikimedia project, is built. It allows for multi-language instances. For Wikibase use cases, see the Wikibase Use Case box at the bottom of the Use Cases page.

Wikimedia Movement

Wikimedia is a global movement that seeks to bring free education to the world via websites known as Wikimedia Projects. Wikimedia Projects are hosted by the Wikimedia Foundation. Some of these Projects are listed below. Access the full family of Wikimedia Projects here.

  • WikiCite Wikimedia. WikiCite. (2019, July 20). WikiCite is an initiative to develop a database of open citations and linked bibliographic data to better manage citations across Wikimedia projects and languages. Potential applications include ease of discovering publications on a given topic, profiling of authors and institutions, and visualizing knowledge sources in new ways.
  • Wikidata Wikidata is a free linked database that acts as central storage for the structured data of Wikimedia projects including Wikipedia, Wikivoyage, Wikisource, and others. It can be read and edited by both humans and machines. The content of Wikidata is available under a free license, exported using standard formats, and can be interlinked to other open data sets on the linked data web.
  • Wikipedia Wikipedia is the free, collaboratively edited encyclopedia within the MediaWiki universe. A page in Wikipedia is an article to which Wikidata can link.
  • Wikimedia Commons Wikimedia Commons is a repository of freely usable media files to which anyone can contribute. Media files from Wikimedia can be linked to Wikidata statements.
  • Wikiquote Wikiquote is a free compendium of sourced quotations from notable people and creative works in every language and translations of non-English quotes. It links to Wikipedia for further information.
  • Wiktionary Wiktionary. (2019, June 27). Wiktionary is the English-language collaborative Wikimedia Project to produce a free-content multilingual dictionary. It aims to describe all words of all languages using definitions and descriptions in English. It includes a thesaurus, a rhyme guide, phrase books, language statistics and extensive appendices.
  • Wikibase DataModel MediaWiki. (2019, May 19). This specification describes the structure of the data that is handled in Wikibase. It specifies which kinds of information can be contributed to the system. The Wikibase data model provides an ontology for describing real-world entities, and its descriptions are concrete models of those entities. For a less technical explanation of the model, see the Wikibase DataModel Primer.
  • Wikibase DataModel Primer MediaWiki. (2018, August 1). This primer gives a good introduction to the Wikibase data model. It provides an outline and an explanation of the different elements of the Wikibase knowledge base and describes the function of each.

Wikibase Resources

  • Install Docker Compose A usable Docker installation for running Wikibase requires Docker Compose. If an installation is missing Docker Compose, this page provides instructions for installing it.
  • Installing a stand-alone Wikibase and most used services This GitHub page provides instructions for establishing a Wikibase instance. It was developed by a member of Wikimedia Deutschland e. V. and four other software developers.
  • Use cases for institutional Wikibase instances Mendenhall, Timothy R., Chou, Charlene, Hahn, Jim, et al. (2020, May). Developed informally by library staff at Columbia University, Harvard University, New York University, and the University of Pennsylvania, this GitHub page provides a wealth of information for institutions considering installing their own Wikibase instance. It covers a wide range of topics, such as local vocabularies, authority control, organizational name changes, cross-platform discovery, multilingual discovery, the pipeline to Wikidata and broader web discovery, digital humanities, database, metadata, and exploratory projects, and more; each topic also supplies a use case example.
  • Wikibase Consultants and Support Providers Wikimedia. (2021, Jan. 14). This page lists Wikibase service providers who can help resolve issues with Wikibase instances.
  • Wikibase Install Basic Tutorial Miller, Matt. (2019, September). Semantic Lab at Pratt Institute. This tutorial provides instructions for setting up Wikibase using Docker. The tutorial uses Digital Ocean, and it requires setting up an account at Digitalocean.com.
  • Wikibase Roadmap 2020 High-level Overview (Public) Wikimedia. (2021, Jan. 11). This is an interactive chart that describes Wikibase development initiatives, including Wikibase Features, Wikibase System Improvements, Partnerships & Programs, Documentation, Wikibase Strategy & Governance, and Developer Tools.

Wikidata Alert

  • Wikidata:SPARQL query service/Blazegraph failure playbook Wikidata. (2021, Dec. 13). This Wikidata article describes the steps the Wikimedia Foundation is considering in the event of a catastrophic failure of its SPARQL Query Service, which is powered by Blazegraph. Such a failure would occur if the amount of queryable data exceeds Blazegraph's limits.

Wikidata is a free, collaborative, multilingual software application built from Wikibase components that can be read and edited by humans and machines. It collects structured data to provide support for Wikimedia Projects including Wikipedia, Wikimedia Commons, Wikivoyage, Wiktionary, Wikisource, and others. The content of Wikidata is available under a free license, exported using standard formats, and can be interlinked to other open data sets on the linked data web.
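As a small illustration of what "read by machines" means, the sketch below builds (but does not send) a request to the public Wikidata SPARQL endpoint at query.wikidata.org using only the Python standard library. The endpoint URL, the P31 ("instance of") property, and the Q146 ("house cat") item are real; the query and User-Agent string are illustrative choices.

```python
# Sketch: build a request for the Wikidata Query Service without sending it.
# Q146 is the Wikidata item for "house cat"; P31 is "instance of".
from urllib.parse import urlencode
from urllib.request import Request

WDQS = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

def build_request(query):
    """Return a urllib Request asking the endpoint for JSON results."""
    params = urlencode({"query": query, "format": "json"})
    # Wikimedia services expect a descriptive User-Agent; this value is
    # a placeholder, not a registered bot name.
    return Request(f"{WDQS}?{params}", headers={"User-Agent": "example-bot/0.1"})

req = build_request(QUERY)
print(req.full_url[:80])
```

Sending the request with urllib.request.urlopen would return JSON-formatted SPARQL results that a program can iterate over.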

  • Wikidata Introduction Wikidata. (2018, June 18). This page provides a quick overview of Wikidata, its function within the Wikimedia universe, and an introduction to Wikidata basics.
  • Wikidata List of Policies and Guidelines Wikidata. (2019, January 16). This page lists the proposed and approved policies and guidelines that govern Wikidata covering a wide range of topics including Living people, Deletion policy, Sources, Editing restrictions, Statements, Sitelinks, Verifiability, Administrators, Property creators, CheckUser, and more.
  • Wikidata: Notability Wikidata.Notability. (2019, October 10). This page describes the Wikidata policy that sets forth the criteria needed for an item to be acceptable in Wikidata. It provides a link to a list of Wikimedia pages that are not considered automatically acceptable in Wikidata, and a link to a list of items that have been considered acceptable, in accordance with the general guidelines on this page.
  • Wikidata: Property constraints portal Wikidata. (2020, June 19). Help:Property constraints portal. This page provides information on property constraints including a list of types and links to pages explaining how the constraints should be applied.
  • Wikidata Sandbox Wikidata. (2020, August 24). This page provides a link to the Wikidata Sandbox in which you can experiment and practice using Wikidata. For experimenting with creating new items and properties, use the test.wikidata link on this page.
  • Wikidata Tours (2018, April 7). This page provides access to interactive tutorials showing how Wikidata works and how to edit and add data.

Articles, Development Plans, & Reports

  • ARL White Paper on Wikidata Opportunities and Recommendations Association of Research Libraries (ARL). (April, 2019). In Wikisource. This paper discusses joint efforts between ARL and Wikidata to explore a way to interlink Wikidata to sources of library data and provide libraries and other GLAM institutions the opportunity to get involved in contributing to modeling and data efforts on a larger scale. Some possible contributions include name authorities, institutional holdings, and faculty information. Suggestions for contributing to Wikidata are also explored.
  • Creating Library Linked Data with Wikibase: Lessons Learned from Project Passage OCLC Research. (2019, August). This document describes OCLC's Project Passage, a Wikibase prototype in which librarians from 16 US institutions experimented with creating linked data to describe resources without requiring knowledge of the technical machinery of linked data. The report provides an overview of the context in which the prototype was developed, how the Wikibase platform was adapted for use by librarians, and discusses use cases where participants describe creating metadata for resources in various formats and languages using the Wikibase editing interface. The potential of linked data in library cataloging workflows, the gaps that must be addressed before machine-readable semantic data can be fully adopted and lessons learned are also addressed.
  • Differences between Wikipedia, Wikimedia, MediaWiki, and wiki MediaWiki. (2019, April 19). This article provides a brief description of components and related software of the Wikimedia movement. It also provides links to additional Wikimedia movement resources.
  • Introducing Wikidata to the Linked Data Web Erxleben, Fredo, Gunter, Michael, Krotzsch, Markus, Mendez, Julian, and Vrandecic, Denny. (2014). This document explains the Wikidata model and discusses its RDF encoding. It is a good place to start if you are considering editing Wikidata.
  • Lexemes in Wikidata: 2020 Status Nielsen, Finn Arup. (2020, May). Proceedings of the 7th Workshop on Linked Data in Linguistics, pages 82–86. This article discusses the use of lexemes in different languages, senses, forms, and etymology in Wikidata.
  • Wiki Wikipedia. (2019, August 22). This Wikipedia article explains the features of a wiki knowledge base website and discusses the software, history, implementations, editing, trustworthiness, and other aspects of a wiki.
  • Wikidata:Development Plan [2020] Wikidata. (2020, June 17). This page provides an interactive roadmap to the projects on which the Wikidata Development Team is working during 2020. Clicking on projects in the Wikidata matrix will provide information on projects under the categories: Increase Data Quality & Trust; Build Out the Ecosystem; Encourage More Data Use; Enable More Diverse Data and Users; and Other. The Wikibase matrix includes categories: Wikibase Features; Wikibase System Improvements; Partnerships & Programs; Documentation; Wikibase Strategy & Governance; and Developer Tools.
  • Wikidata: Development Plan [2022] Wikidata. (2022, February 10). This page provides the roadmap for the Wikidata development team (Wikimedia Deutschland) for Wikidata and Wikibase for 2022. Highlights of the plan include empowering the community to increase data quality, strengthening underrepresented languages, increasing re-use for greater impact, empowering knowledge curators to share their data, enabling the ecosystem, and connecting data across technological & institutional barriers. Some objectives include better understanding of which organizations want to use Wikibase in the future and for what, ensuring Wikibases can connect more deeply with each other and Wikidata to form an LOD web, user testing of federated properties in combination with local properties, and more.
  • Wikidata/Strategy/2019 Wikimedia. (2019, August 27). Wikidata/Strategy/2019. This page provides access to a product vision paper and three product strategy papers discussing possible future developments for Wikidata and Wikibase and a very ambitious role in shaping the future of the information commons through 2030. Topics discussed include strategies for making Wikimedia projects ready for the future; maintaining and supporting Wikimedia's growing content; ensuring the integrity of Wikimedia content; furthering knowledge equity; and enabling new ways of consuming and contributing knowledge. There is a strategy paper discussing Wikidata as a platform and another discussing the Wikibase ecosystem.
  • Wikimedia:LinkedOpenData/Strategy2021/Joint Vision Wikimedia. Linked Open Data/Strategy 2021/Joint Vision. This document sets out the Wikibase and Wikidata joint vision for working in Linked Open Data. The document describes the vision, strategy, guiding principles, and approach to building out the Wikibase ecosystem.

Wikidata Related Resources

  • Creating and editing libraries in Wikidata Scott, Dan. (2018, February 18). Dan Scott's blog provides useful linked data information. This blog entry describes how to create a Wikidata item for a particular library. Properties useful for describing libraries and their collections are also provided.
  • Linked Open Data Cloud Wikidata Page This LOD Cloud page provides information about Wikidata, including download links, contact information, SPARQL endpoint, triples count, the Wikidata namespace, and more. It also provides examples of Wikidata concepts using information about Nelson Mandela in Wikidata.
  • MediaWiki MediaWiki. (2019, June 14). This is the MediaWiki main page. MediaWiki is a multilingual, free and open, extensible, and customizable wiki software engine used for websites to collect, organize, and make available knowledge. It was developed for Wikipedia and other WikiMedia Projects. It includes an API for reading and writing data, and support for managing user permissions, page editing history, article discussions, and an index for unstructured text documents.
  • Practical Wikidata for Librarians Wikidata. (2021, Feb. 11). Wikidata:WikiProject Linked Data for Production/Practical Wikidata for Librarians. This page provides a vast array of resources for librarians and archivists interested in editing Wikidata, and provides a space to share data models and best practices. Among the resources are instructional materials, policies, project recipes, verifiability, guidelines for describing entities in particular domains, constraint reports, user scripts, gadgets, and more.
  • User:HakanIST/EntitySchemaList Wikidata. (2021, April 20). This is a list of schemas used for describing entities in Wikidata, compiled by Wikidata user HakanIST.
  • Wikidata editing with OpenRefine Wikidata: Tools/OpenRefine/Editing. (2021, April 25). This page provides links to tutorials, videos, and a reference manual demonstrating how to use OpenRefine to add and edit items in Wikidata. It also demonstrates using MarcEdit with OpenRefine and Wikidata.
  • Wikidata in Brief Wikimedia Commons. (2017, July 31). This document gives a one page overview of Wikidata.
  • Wikidata Query Service in Brief Stinson, Alex. (2018, March). This document gives a one page overview of the Wikidata query service.
  • Wikimedia Wikibooks. (2019, April 12). This open book provides information on how to use Wikis covering topics including editing, basic markup language, images, templates, categories, namespaces, administrative namespaces, user namespaces, Wikipedia, Wikimedia Commons, Wikibooks, Meta, Wikidata, Wikiversity, Wikispecies, Wikiquotes, Wikivoyage, and more.
  • Works in Progress Webinar: Introduction to Wikidata for Librarians OCLC Research. (2018, June 12). This OCLC webinar gives a brief introduction to Wikidata.
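The MediaWiki API mentioned above supports reading and writing structured data. As a sketch of a read-only call, the snippet below builds the URL for a wbgetentities request against Wikidata's Action API. The endpoint and the action, ids, props, languages, and format parameters are real parts of this API; the entity_url helper name is our own, and Q42 (the Wikidata item for Douglas Adams) is a commonly used example item.

```python
# Sketch: build the URL for a read-only wbgetentities call against the
# Wikidata Action API. The parameters are real; entity_url is a
# hypothetical helper name chosen for this example.
from urllib.parse import urlencode

API = "https://www.wikidata.org/w/api.php"

def entity_url(qid, props="labels|claims", lang="en"):
    """Return an Action API URL fetching labels and claims for one item."""
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "props": props,
        "languages": lang,
        "format": "json",
    }
    return f"{API}?{urlencode(params)}"

url = entity_url("Q42")  # Q42 is the Wikidata item for Douglas Adams
print(url)
```

Fetching this URL returns a JSON document whose "entities" object holds the item's labels and statements, which is how downstream tools consume Wikidata programmatically.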

WikiProject Universities

  • WikiProject Universities Wikidata. (2019, August 21). The purpose of this WikiProject is to provide better coverage of universities and other research institutions in Wikidata. The goal is to create a comprehensive and rich catalog of institutions, with strong links to other entities in the academic ecosystem (researchers, publications, alumni, facilities, projects, libraries…). The scope of the project includes listing the recommended statements about universities and evaluating their coverage across Wikidata; building showcase items to demonstrate what a university item should ideally look like; linking between items about universities and their subunits; linking items about people to items of the universities they are/were educated at, work(ed) at, or are/were otherwise affiliated with; and providing counts by type, country, etc. Subpages and participants are listed.
  • WikiProject University of California Wikipedia. (2019, February 17). This Wikipedia article describes the WikiProject to improve Wikipedia's coverage of the University of California system, which encompasses University of California campuses, professional schools, facilities and biographies of major figures. The site provides links to WikiProjects for all of the University of California campuses and the UC System.
  • WikiProject Stanford Libraries Stanford Wikidata Working Group. (2019, September 4). This is the page for a WikiProject for work done at Stanford Libraries to connect library data with Wikidata. The page provides useful links, references, and guides covering a wide range of topics including Description guidelines, Wikidata policies and guidelines, Quick reference guides, Property resources, Projects, and much more.
  • WikiProject Books WikiProject Books is used to: define a set of properties to be used by book infoboxes, Commons books templates, and Wikisource; map and import relevant data currently spread across Commons, Wikipedia, and Wikisource; and establish methods to interact with this data from different projects. Based on the Functional Requirements for Bibliographic Records (FRBR) model, Wikimedia projects use a two-level model of "work" and "edition", collapsing the "expression" and "manifestation" levels of the FRBR model into the single "edition" level. Bibliographic properties and qualifiers are listed here.
  • WikiProject Heritage institutions Wikidata. (2019, October 3). This project aims to create a comprehensive, high-quality database of archives, museums, libraries and similar institutions. While the main focus is on institutions that have curatorial care of a collection, the scope of the project includes related institutions, such as lending libraries, exhibition centers, zoos, and the like, to the extent that they are not covered by any other WikiProject. The project also serves to coordinate a range of activities, including the creation of an inventory of all existing public databases that contain data about heritage institutions, the implementation and maintenance of ontologies and multilingual thesauri relating to heritage institutions, the ingestion of data about heritage institutions into Wikidata, the inclusion of the data into Wikipedia and its sister projects through Wikidata-powered infobox templates or lists, and more.
  • WikiProject Libraries Wikidata. (2019, September 25). The aim of this WikiProject is to define a structure for libraries and to create and improve items about libraries. The page provides item identifiers for types of libraries.
  • WikiProject Linked Data for Production/Practical Wikidata for Librarians Wikidata. (2020, August 25). This project seeks to gather and organize resources for librarians interested in editing Wikidata as well as to prevent duplicative work and provide stepping stones and guidance for librarians interested in working with Wikidata. Resources, links to gadgets and user scripts, information on data modeling, and project recipes are provided.
  • WikiProject Maps Wikidata:WikiProject Maps. (2019, April 25). This Wikidata page provides access to geographic projects in Wikidata, possible properties to use for maps entered into Wikidata, a list of map types, and a link to maps on Wikidata.
  • WikiProject Medicine/National Network of Libraries of Medicine Wikimedia. (2019, September 17). The goals of this Wikipedia project are to improve the quality of Wikipedia's medical-related articles using authoritative mental health resources, raise the visibility of NLM mental health resources, and promote Wikipedia as an outreach tool for engagement and open data. The project is centered on an edit-a-thon in October and November 2019.
  • WikiProject Museums Wikidata. (2018, May 22). This project aims to define properties for items related to museums and the rules of use for these properties (qualifiers, datatypes, ...) and to organize the creation and improving the quality of the elements. The page provides suggested properties to use with museum related entities, tools, and example queries.
  • WikiProject Source MetaData Wikidata. WikiProject Source MetaData. (2019, August 11). WikiProject Source MetaData aims to: act as a hub for work in Wikidata involving citation data and bibliographic data as part of the broader WikiCite initiative; define a set of properties that can be used by citations, infoboxes, and Wikisource; map and import all relevant metadata that currently is spread across Commons, Wikipedia, and Wikisource; establish methods to interact with this metadata from different projects; create a large open bibliographic database within Wikidata; and reveal, build, and maintain community stakeholdership for the inclusion and management of source metadata in Wikidata. This page provides information regarding ongoing imports and projects, and a very substantial list of metadata sub-pages belonging to this project.
  • WikiProject Periodicals Wikidata: WikiProject Periodicals. (2019, June 15). This project aims to: define a set of properties from w:Template:Infobox_journal and w:Template:Infobox_magazine (and other languages), especially prior names with year ranges, and standard abbreviations; define a set of properties about periodical publishers, including learned societies; map and import 'Journals cited by Wikipedia'; map and import all relevant data to the Wikipedia collection of journal articles at w:Category:Academic journal articles / w:Category:Magazine articles (and other languages), and link these items to the reason for their notability - e.g. the discovery that was made, or event it records; prepare for linking the Wikisource collection of journal/magazine articles into Wikidata; map and import all other relevant data that currently is spread in Commons, Wikipedia, and Wikisource; and establish methods to interact with this data from different projects. This page provides lists of properties relevant to periodicals.
  • Citing sources Wikidata. (2019, January 5). This is a list of properties appropriate for citing sources in Wikidata. The list includes such properties as place of publication, imported from Wikimedia project, publisher, author, stated in, chapter, described by source, quote, inferred from, archive date, etc.
  • Wikidata List of Properties Wikidata. (2019, July 22). This page provides access to Wikidata properties by broad description topics. The page also lists tools for browsing properties in different languages, and a download option for all properties.
  • Wikidata property for items about people or organisations Wikidata. (2019, February 7). This is a list of properties that can be used to describe people or organizations. It encompasses a very wide range such as head of state, flag, logo, movement, league, chief executive officer, headquarters location, record label, Queensland Australian Football Hall of Fame inductee ID, field of work, award received, etc.
  • Wikidata property for items about people or organisations/human/authority control Wikidata. (2019, October 5). This is a list of Wikidata name authority control properties for writers, artists, architects, and organizations.
  • Wikidata property for items about works Wikidata. (2019, February 11). This is a list of properties to describe works such as articles, books, manuscripts, authority control for works, plays, media items, musical works, algorithms, software, structures, comics, television programs, works of fiction, and films.

There are many tools developed for working with Wikidata, many of which are listed on the Wikidata Tools page below. General tools that are helpful for editing and adding items to Wikidata are listed here.

  • Author Disambiguator Wikidata:Tools/Author Disambiguator. (2020, October 2). Author Disambiguator is a tool for editing authors of works recorded in Wikidata, and is partially coordinated with the Scholia project, which provides visual representations of scholarly literature based on what can be found in Wikidata. By converting author strings into links to author items, a much richer analysis and tracing of relationships between researchers and their works, institutions, co-authors, etc. can be achieved. The tool's ability to integrate with Scholia provides enhanced visual analysis.
  • Cradle Manske, Magnus. Cradle is a tool for creating new Wikidata items using a form. A link to existing forms along with their descriptions is provided. It is also possible to compose an original form.
  • Docker Desktop Docker Desktop is a MacOS and Windows application for building and sharing containerized applications and microservices and delivering them from your desktop. It enables the leveraging of certified images and templates in a choice of languages and tools. Docker Desktop can use Kubernetes, the Google-developed open-source orchestration system for automating the management, placement, scaling, and routing of containers.
  • EntiTree Schibel, Martin. EntiTree generates dynamic, navigable tree diagrams of people, organizations, and events based on information drawn from several sources and linked to Wikipedia articles.
  • FindingGLAMs This tool is a modified version of Monumental used to display information and multimedia about cultural heritage institutions gathered through Wikidata, Wikipedia, and Wikimedia Commons. Search by name of institution, or explore by geographic region, example item, or city.
  • Miraheze Miraheze is a non-profit MediaWiki hosting service created by John Lewis and Ferran Tufan. The service offers free MediaWiki hosting, compatible with VisualEditor and Flow.
  • Monumental Marynowski, Paweł and LaPorte, Stephen. (2017). This tool displays information and multimedia about cultural heritage monuments gathered through Wikidata, Wikipedia and Wikimedia Commons. Explore by entering a name of a monument, geographical region, example monument, or city.
  • OpenRefine Wikidata:Tools/OpenRefine. (2019, July 17). OpenRefine is a free data wrangling tool used to clean tabular data and connect it with knowledge bases, including Wikidata. This page provides recipes, instructions, and links to tutorials.
  • osm-wikidata Betts, Edward. Downloaded October 22, 2019. Use this tool to match OpenStreetMap (OSM) entities with Wikidata items. It uses the Wikidata SPARQL query service and the OSM Overpass and Nominatim APIs. Installation and configuration instructions are provided.
  • Scholia Nielsen, Finn Arup, Mietchen, Daniel, et al. (2020). Scholia is a service which uses the information in Wikidata to create visual scholarly profiles for topics, people, organizations, species, chemicals, etc. It can be used with the Author Disambiguator tool to generate bar graphs, bubble charts, line graphs, scatter plots, etc.
  • Semantic MediaWiki Krötzsch, Markus. (2020, Apr. 19). Semantic MediaWiki (SMW) is an open source extension for MediaWiki, the software that powers Wikipedia. It provides the ability to store data in wiki pages, and query it elsewhere, thus turning a wiki that uses it into a semantic wiki.
  • Wikidata:SourceMD Wikidata. (2019, February 10). SourceMD, a.k.a. the Source Metadata tool, takes a persistent identifier for a scholarly article or book and automatically generates Wikidata items using metadata from scholarly publications. The tool works with these identifiers: ISBN-13 (P212); DOI (P356); ORCID iD (P496); PubMed ID (P698); and PMCID (P932).
  • Wikidata Tools (2019, January 18). This page provides a list of tools to ease working with Wikidata, including a property list, query tools, lexicographical data tools, tools for editing items, data visualization tools, a Wikidata graph builder, and more.
  • Wikimedia Programs & Events Dashboard Wikimedia. (2020, January 21). The Programs & Events Dashboard is a management tool used to initiate and organize edit-a-thons, campaigns, and other wiki events. It provides instructions, registration functions, tracking functions to measure and report the outcome of a program (number of editors, number of edits, items created, references added, number of views, etc.).

This page provides access to documents and reports associated with workshops, institutions, organizations, or other entities which relate valuable information, or describe initiatives or projects regarding the Semantic Web or Linked Data.

  • Addressing the Challenges with Organizational Identifiers and ISNI Smith-Yoshimura, Karen, Wang, Jing, Gatenby, Janifer, Hearn, Stephen, Byrne, Kate. (2016). This webinar documents the challenges, use cases, and scenarios in which the International Standard Name Identifier (ISNI) can be used to disambiguate organizations. ISNI associates a unique, persistent, and public URI with an organization that is resolvable globally over networks via specific protocols, providing the means to find and identify an organization accurately and to define the relationships among its sub-units and with other organizations.
  • BIBCO Mapping BSR to BIBFRAME 2.0 Group: Final Report to the PCC Oversight Group BIBCO Mapping BSR to BIBFRAME 2.0 Group. (2017, July). This report summarizes the BIBCO Mapping the BIBCO Standard Record (BSR) to BIBFRAME 2.0 group's work and identifies issues that require further discussion by the Program for Cooperative Cataloging (PCC).
  • BIBCO Standard Record to BIBFRAME 2.0 Mapping BIBCO Mapping BSR to BIBFRAME 2.0 Group. (2017, July). This spreadsheet maps BIBCO Standard Record elements to BIBFRAME 2.0. The information included in the spreadsheet covers RDA instructions and elements, MARC coding, rda-rdf properties as defined in the RDA Registry, the triple statements needed to properly map each element, and specific instructions pertaining to individual elements.
  • BIBFLOW BIBFLOW is a two-year project of the UC Davis University Library and Zepheira, funded by IMLS. Its official title is “Reinventing Cataloging: Models for the Future of Library Operations.” BIBFLOW’s focus is on developing a roadmap for migrating essential library technical services workflows to a BIBFRAME / Linked Open Data (LOD) ecosystem. This page collects the specific library workflows that BIBFLOW will test by developing systems to allow library staff to perform this work using LOD-native tools and data stores. Interested stakeholders are invited to submit comments on the workflows developed and posted on this site. Information from comments will be used to adjust testing as the project progresses.
  • British Library Data Model This is the British Library's data model for a resource.
  • British Library Data Model - Book This is the British Library's data model for cataloging a book in a Semantic Web environment.
  • British Library Data Model - Serial This is the British Library's data model for cataloging a serial in a Semantic Web environment.
  • Common Ground: Exploring Compatibilities Between the Linked Data Models of the Library of Congress and OCLC Godby, Carol Jean and Denenberg, Ray. (2015, Jan.). Library of Congress and OCLC Research. This white paper compares and contrasts the Bibliographic Framework Initiative at the Library of Congress and OCLC’s efforts to refine the technical infrastructure and data architecture for at-scale publication of linked data for library resources in the broader Web.
  • CONSER CSR to BIBFRAME Mapping Task Group: [Final Report] of the PCC BIBFRAME Task Group CONSER CSR to BIBFRAME Mapping Task Group. (2017). This report summarizes the mapping outcomes and recommendations of the group for mapping CONSER Standard Record (CSR) elements to BIBFRAME 2.0. It also identifies several issues that will require further discussion.
  • CONSER Standard Record to BIBFRAME 2.0 Mapping CONSER CSR to BIBFRAME Mapping Task Group. (2017, July). This spreadsheet maps the CONSER Standard Record (CSR) elements to BIBFRAME 2.0. The spreadsheet's "Examples" column contains links to sample code documents containing Turtle serializations of each CSR element in BIBFRAME.
  • Europeana pro Europeana Foundation. This site provides a detailed description of the European Union's Linked Open Data initiative, including a history, the Europeana Data Model, a list of namespaces used, tools, and more.
  • Game Metadata and Citation Project (GAMECIP) This University of California Santa Cruz and Stanford University project is developing the metadata needs and citation practices surrounding computer games in institutional collections. It seeks to address the problems of cataloging and describing digital files, creating discovery metadata, and providing access tools associated with the stewardship of digital games software stored by repositories. The site provides information regarding tools and vocabularies under development.
  • IIIF Explorer OCLC ResearchWorks. (2020). The IIIF Explorer is a prototype tool that searches across an index of all of the images in the CONTENTdm digital content management systems hosted by OCLC.
  • Library of Congress Labs The Library of Congress Labs site shares experimental initiatives the Library is conducting with its digital collections. Access videos, reports, presentations, and APIs. Clicking on the LC for Robots tab provides bulk data for Congressional bills, MARC records (in UTF-8, MARC8, and XML), Chronicling America, and more. The site demonstrates how to interact with the Library's collection.
  • Linked Art Linked Art is a Community working on creating a shared model to describe art based on Linked Open Data. The site lists partner projects, consortia, and institutions.
  • Linked Data for Libraries (LD4L) LD4L is a collaborative project of Cornell University Library, the Harvard Library Innovation Lab, and the Stanford University Libraries. The project is developing a Linked Data model to capture the intellectual value added to information resources when they are described, annotated, organized, selected, and used, along with the social value evident from patterns of usage.
  • Linked Data for Production: Closing the Loop (LD4P3) LD4P3 aims to create a working model of a complete cycle for library metadata creation, sharing, and reuse. LD4P3 builds on the foundational work of LD4P2: Pathway to Implementation, LD4P Phase 1, and Linked Data for Libraries Labs (LD4L Labs). Access the statement of objectives for two domain projects, one for cartographic material and one for film/moving image resources.
  • Linked Data for Production: Pathway to Implementation (LD4P2) Futornick, Michelle. (2019, January 14). LD4P Phase 2 builds upon the work of Linked Data for Production (LD4P) Phase 1 and Linked Data for Libraries Labs (LD4L Labs). This phase marks the beginning of implementing the cataloging community’s shift to linked data for the creation and manipulation of their metadata. Access information regarding the seven goals of Phase 2 outlined by the institutions collaborating on the project: Cornell; Harvard; Stanford; the University of Iowa School of Library and Information Science; and the Library of Congress and the Program for Cooperative Cataloging (PCC).
  • Linked Data Wikibase Prototype OCLC Research. In partnership with several libraries, OCLC has developed a prototype to demonstrate the value of linked data for improving resource-description workflows in libraries. The service is built on the Wikibase platform to provide three services: a Reconciler to connect legacy bibliographic information with linked data entities; a Minter to create and edit new linked data entities; and a Relator to view, create, and edit relationships between entities.
  • Looking Inside the Library Knowledge Vault Washburn, Bruce and Mixter, Jeff. (2015, Aug. 26). This is a YouTube recording of an OCLC Research Works in Progress webinar describing how OCLC Research is evaluating the Google Knowledge Vault model to test an approach to building a Library Knowledge Vault.
  • OCLC Data strategy and linked data This page describes OCLC library bibliographic initiatives focusing on designing and implementing new approaches to re-envision, expose, and share library data as entities that are part of the Semantic Web.
  • RDA Input Form The RDA Input Form is a proof-of-concept experiment created by the Cataloging and Metadata Services of the University of Washington to demonstrate that RDA cataloging (input) can be easily output in multiple schemas using a processing pipeline and mappings. The form focuses on PCC core, and output is in RDA/RDF and BIBFRAME in RDF-XML. The experiment showed that output in these schemas can be generated in an automated fashion using a pipeline. The implication for future production cataloging systems is that input and output should not be directly tied to each other, and that cataloging systems should have sufficient flexibility to output in multiple schemas in an automated way.
  • Report of the Stanford Linked Data Workshop This report includes a summary of the workshop agenda and a chart showing the use of Linked Data in cultural heritage venues for the workshop held at Stanford University June 27 - July 1, 2011.
  • rightsstatements.org This GitHub page provides access to the request for proposals issued by the International Rights Statements Working Group, a joint Digital Public Library of America (DPLA) and Europeana Foundation working group to develop and implement a technical infrastructure for a rights statements application, a content management system, and a server configuration, deployment, and maintenance implementation for rights management. Links to a PDF version of the request and a PDF version of the "Requirements for the Technical Infrastructure for Standardized International Rights Statements" are provided.
  • Schema Bib Extend Community Group This W3C group was formed to discuss and prepare proposal(s) for extending Schema.org schemas for the improved representation of bibliographic information markup and sharing. Access the group wiki, contact information, a mailing list, information regarding joining the group, information about proposals, an RSS feed, and recipes and guidelines.
  • SHARE Virtual Discovery Environment project Casalini Libri, @Cult, and participating libraries. The aim of this project (Share-VDE Project) is to design a flexible configuration that uses the paradigms of the Semantic Web to provide a way for libraries to handle their data related to information management, enrichment, entity identification, conversion, reconciliation, and publication processes of the Semantic Web as independently as possible. The project provides a prototype of a virtual discovery environment with a three BIBFRAME layer architecture (Person/Work, Instance, Item) established through the individual processes of analysis, enrichment, conversion, and publication of data from MARC21 to RDF. Records from libraries with different systems, habits, and cataloguing traditions were included in the prototype.
  • Stanford Linked Data Workshop Technical Plan This report summarizes the output of the Linked Data in cultural heritage venues workshop held at Stanford University June 27 - July 1, 2011.
  • Stanford Tracer Bullets Futornick, Michelle. (2008, August 6). This Stanford Linked Data production project focused on all the steps to transition to a linked data environment in four technical services workflows: copy cataloging through the Acquisitions Department, original cataloging, deposit of a single item into the Stanford Digital Repository, and deposit of a collection of resources into the Stanford Digital Repository.
  • Wikipedia + Libraries: Better Together Wikipedia + Libraries: Better Together was an 18-month OCLC project to strengthen the ties between US public libraries and English Wikipedia which ended in May, 2018. Information provided includes how librarians use and contribute to Wikipedia, teach information literacies using Wikipedia, and use Wikipedia for events. Training materials are provided.
  • The Europeana Linked Open Data Pilot Haslhofer, Bernhard and Isaac, Antoine. Proc. In Int’l Conf. on Dublin Core and Metadata Applications 2011. This is the model developed to make metadata available from Europeana data providers as Linked Open Data. The paper describes the model and experiences gained with the Europeana Data Model (EDM), HTTP URI design, and RDF store performance.
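
The BSR and CSR mapping spreadsheets above express catalog elements as BIBFRAME 2.0 triples, with Turtle serializations given as examples. For orientation, a minimal record in that pattern might look like the following sketch; the ex: identifiers and the title are invented, while bf:Work, bf:Instance, bf:instanceOf, bf:title, and bf:mainTitle are terms from the BIBFRAME 2.0 vocabulary:

```turtle
@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
@prefix ex: <http://example.org/> .

# A Work (the conceptual resource) with a structured title.
ex:work1 a bf:Work ;
    bf:title [ a bf:Title ; bf:mainTitle "An Invented Title" ] .

# An Instance (a published embodiment) linked back to its Work.
ex:instance1 a bf:Instance ;
    bf:instanceOf ex:work1 .
```

The Work/Instance split is the central modeling decision the mapping groups had to work through when translating flat MARC records into BIBFRAME.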

This page provides links to examples of Linked Data currently in use.

  • BBC Academy: Linked Data The British Broadcasting Company (BBC) is an early experimenter and adopter of Linked Data. The BBC Backstage project, working with Wikipedia, developed and produced content-rich prototypes showing the potential of Linked Data. Explore this site to experience the hidden power of seamlessly exploited Linked Data.
  • Becoming Data Native: How BIBFRAME Extensibility Delivers Libraries a Path to Scalable, Revolutionary Evolution Miller, Eric. (2017). Zepheira and The Library.Link Network. This is a PowerPoint presentation by Eric Miller presented at the 2017 American Library Association conference. It describes how third party linked data library vendor Zepheira uses BIBFRAME in its iterations to connect library collections to the linked data cloud, including the Library of Congress collection.
  • BIBFRAME 2.0 Implementation Register Library of Congress. The BIBFRAME 2.0 implementation register lists existing, developing, and planned implementations of BIBFRAME 2.0, the Library of Congress' replacement for MARC.
  • The British National Bibliography The BNB is the single most comprehensive listing of UK titles, recording the publishing activity of the United Kingdom and the Republic of Ireland. It includes print publications since 1950 and electronic resources since 2003.
  • Dallas Public Library This Dallas Public Library site demonstrates a Library.Link Network instance of library resources implemented by third party linked data vendor, Zepheira.
  • Data.gov Data.gov is the open data initiative of the United States government. It provides federal, state and local data, tools, and resources to conduct research, build apps, design data visualizations, and more. Data are provided by hundreds of organizations and Federal agencies, and the code is open source. The data catalog is powered by CKAN, and the content seen is powered by WordPress.
  • data-hnm-hu - Hungarian National Museum Datasets The Hungarian National Museum has made its Linked Data datasets available on datahub. As a means of familiarizing Hungarian librarians with BIBFRAME, the datasets were published so that the BIBFRAME and MARC descriptions are cross-linked. The conversion features work and entity recognition, and named entities are linked to external datasets.
  • dblp computer science bibliography Schloss Dagstuhl - Leibniz Center for Informatics. (2020, January 4). dblp is an on-line reference database providing free access to high-quality bibliographic meta-data and links to the electronic editions of computer science publications. When an external website that hosts an electronic edition of a research paper is known, a hyperlink together with the bibliographic meta-data is provided. Some links require subscriptions and some are open access.
  • Digital Public Library of America (DPLA) The Digital Public Library of America is a portal that brings together and makes freely available digitized collections of America’s libraries, archives, and museums. More than that, DPLA is a platform that provides developers, researchers, and others the ability to create tools and applications for learning and discovery. This is a site worth exploring to see the next generation library. Click on Bookshelf to search for a book. Visit the Apps page to find ways of accessing DPLA's resources. DPLA uses Krikri, a Ruby on Rails engine for metadata aggregation, enhancement, and quality control, as part of Heiðrún, its metadata ingestion system.
  • DTU Orbit - The Research Information System DTU Orbit is the official research database of the Technical University of Denmark (DTU). Browsable in standard web browsers, it provides open access to articles and a linked-data graph interface for cross-searching publications, projects, activities, department profiles, and staff profiles related to publications to which DTU employees have contributed.
  • English Language Books listed in Printed Book Auction Catalogues from 17th Century Holland Alexander, Keith. This datahub dataset lists books in the English language section of Dutch printed book auction catalogues of collections of scholars and religious ministers. For access to this data set and other auction catalogues, see the Printed Book Auction Catalogues resource.
  • Europeana Pro Europeana Foundation. This is the European Union's initiative to share its countries' rich cultural heritage resources. Information regarding APIs, tools, grants, and events are also provided.
  • Harvard LibraryCloud APIs Created by Licht, Jeffrey Louis; last modified by Wetherill, Julie M. (2019, May 6). LibraryCloud is a metadata service that provides open, programmatic access to item and collection APIs that provide search access to Harvard Library collections metadata.
  • Ligatus Ligatus. (2021). Ligatus is part of an initiative of the University of the Arts London conducting research on documentation in historical libraries and archives. Some of the projects include the Language of Bindings Thesaurus, Linked Conservation Data, Artivity (a tool capturing contextual data produced during the creative process of artists and designers while working on a computer), The St. Catherine's Project (conservation support for the unique monastery library in Sinai), and Archive as Event (online archive of the artist John Latham structured using Creative Archiving principles based on Latham's ideas).
  • Linked Jazz This Pratt Institute project is built around oral histories of jazz musicians from Rutgers Institute for Jazz Studies Archives, Smithsonian Jazz Oral Histories, the Hamilton College Jazz Archive, UCLA’s Central Avenue Sounds series, and the University of Michigan’s Nathaniel C. Standifer Video Archive of Oral History. Tools developed for the project include the Linked Jazz Transcript Analyzer, a Name Mapping and Curator Tool, the crowd sourcing tool Linked Jazz 52nd Street, and the Linked Jazz Network Visualization Tool. The project also used Ecco! - a Linked Open Data application for entity resolution designed to disambiguate and reconcile named entities with URIs from authoritative sources.
  • London DataStore The London DataStore is a free and open data-sharing portal providing access to over 500 datasets about London.
  • National Széchényi Library catalogue (National Library of Hungary) The National Széchényi Library provides an example of a library Linked Data interface. Use the search box to perform a search, and click on "Semantic Web" under "Services" to learn more about this library's service and its move to Virtuoso.
  • OCLC Research This page shows OCLC's current research projects on libraries, metadata, collections, library enterprises, and more.
  • Office of the Historian Department of State, United States. The Office of the Historian publishes the Foreign Relations of the United States series, the Guide to Country Recognition and Relations, and the World Wide Diplomatic Archives Index. Among other resources provided by the Office are bibliographic information about U.S. Presidents and Secretaries of State, information about travels of the President and Secretaries of State, visits by foreign heads of state, and more. The office is using the TEI Processing Model and eXist-db for publishing its documents on the Web.
  • Organization Name Linked Data The Organization Name Linked Data (ONLD) is based on the North Carolina State University Organization Name Authority, a tool maintained by the Libraries' Acquisitions & Discovery department to manage the variant forms of name for journal and e-resource publishers, providers, and vendors in their local electronic resource management system (ERMS). Names chosen as the authorized form reflect an acquisitions, rather than bibliographic, orientation. Data is represented as RDF triples using properties from the SKOS, RDF Schema, FOAF and OWL vocabularies. Links to descriptions of the organizations in other linked data sources, including the Virtual International Authority File, the Library of Congress Name Authority File, Dbpedia, Freebase, and International Standard Name Identifier (ISNI) are provided.
  • SHARE Catalogue @Cult, Rome, Italy. Scholarly Heritage and Access to Research Catalog (SHARE Catalogue) is a portal providing a single point of access to the entirety of the integrated resources of eight Italian libraries organized according to the BIBFRAME linked data model.
  • Share-VDE (Virtual Discovery Environment) Share-VDE is a library-driven initiative which collects the bibliographic records and authority files in a shared discovery environment using Linked Data. It is a collaborative endeavor between Casalini Libri, @CULT, the Program for Cooperative Cataloging, international research libraries, and the LD4P project. The Share-VDE interface provides wide-ranging and detailed search results to library patrons. Each library received the information corresponding to its own catalog in Linked Data which may be re-used according to local requirements with no restrictions.
  • Text Creation Partnership (TCP) The TCP is making available standardized, accurate XML/SGML encoded electronic text editions from Early English Books Online (EEBO-TCP), Eighteenth Century Collections Online (ECCO-TCP), Evans Early American Imprints (Evans-TCP), and EEBO-TCP Collections: Navigations. Texts are from ProQuest’s Early English Books Online, Gale Cengage’s Eighteenth Century Collections Online, and Readex’s Evans Early American Imprints and are made available through web interfaces provided by the libraries at the University of Michigan and University of Oxford.
  • University of Edinburgh Wikimedian in Residence University of Edinburgh. (2021). This page lists Wikidata Use Cases from the University of Edinburgh's collaboration with Wikimedia UK. Cases which have garnered international acclaim and served as inspiration for other research and collaborations include Scottish Witches, The Aberdeen Tower Block Archives, Documenting Biomedical Sciences: The Gene Wiki Project, Mapping the Scottish Reformation, Digitising Collections at the National Library of Wales, and others. Projects developed student skills as they surfaced data from MS Access databases to Wikidata as structured, machine-readable, linked open data.
  • University of Southampton Open Data Service University of Southampton Open Data service has developed several mobile apps based on datasets using linked data. The data sets cover all aspects of university life including academic sessions, campus map, buildings, disabilities information, food services, organizations, and more. This initiative won the Times Higher Award in 2012 for Outstanding ICT Initiative of the Year, and a Cost Sector Catering award in 2015 for Best Innovation in Catering.
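
The Organization Name Linked Data entry above represents names as RDF triples using SKOS, FOAF, and OWL vocabularies. As a sketch of that pattern (the helper function, organization, and VIAF URI below are invented; the vocabulary namespaces are real), such a record could be assembled as Turtle in plain Python:

```python
# Sketch: serialize an organization-name authority record as Turtle,
# using real SKOS/FOAF/OWL namespaces but invented data and helper names.
PREFIXES = {
    "skos": "http://www.w3.org/2004/02/skos/core#",
    "foaf": "http://xmlns.com/foaf/0.1/",
    "owl": "http://www.w3.org/2002/07/owl#",
}

def org_record_to_turtle(uri, pref_label, alt_labels, same_as):
    """Build a Turtle description: one preferred name, variant names,
    and owl:sameAs links to other linked-data identities (VIAF, ISNI, ...)."""
    lines = [f"@prefix {p}: <{ns}> ." for p, ns in PREFIXES.items()]
    lines.append("")
    lines.append(f"<{uri}> a foaf:Organization ;")
    lines.append(f'    skos:prefLabel "{pref_label}"@en ;')
    for alt in alt_labels:
        lines.append(f'    skos:altLabel "{alt}"@en ;')
    links = ", ".join(f"<{u}>" for u in same_as)
    lines.append(f"    owl:sameAs {links} .")
    return "\n".join(lines)

record = org_record_to_turtle(
    "http://example.org/org/1",
    "Example Publishing House",
    ["Example Pub. House"],
    ["http://viaf.org/viaf/000000000"],
)
```

The skos:prefLabel/skos:altLabel split mirrors the authorized-versus-variant name distinction that the ONLD authority file maintains.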

Wikibase Use Cases

  • Enslaved Michigan State University, Matrix: Center for Digital Humanities & Social Sciences. This Wikibase instance provides for the exploration of individuals who were enslaved, owned slaves, or participated in the historical slave trade. Search across numerous datasets, browse interconnected data, generate visualizations, and explore short biographies of enslaved and freed peoples.
  • The EU Knowledge Graph European Commission. (2021, March 29). This Wikibase instance contains structured information about the European Union. Click on the Kohesio link to see the Project Information Portal for Regional Policy, which showcases how linked data can be used to provide local policy information regarding different topics.
  • Rhizome Artbase Rhizome provides a dataset for born-digital artworks from 1999 to the present day using the Wikibase platform. Search by date or artist name. Some entries include external links to artworks maintained by artists or others, archived copies hosted on Rhizome infrastructure, and documentation. The instance provides timeline capability and uses its own ontology data model that integrates with Wikidata and other standards.

University of Edinburgh Wikimedian in Residence Projects

  • Mapping the Scottish Reformation This project maps the Scottish Reformation by tracing clerics across early modern and modern Scotland using information from a database of the Scottish clergy generated by Wikidata. Information from this database runs parallel to another University of Edinburgh project, Scottish Witches.
  • Scottish Witches Access the data visualizations of geolocation information pulled from the Survey of Scottish Witchcraft by Geology and Physical Geography student Emma Carroll. The work transformed the Survey from a static database into an acclaimed interactive linked open data collaboration with Wikimedia, with support from Ewan McAndrew, the University of Edinburgh’s Wikimedian in Residence. Information about the project is available.
  • Last Updated: Feb 26, 2024 1:06 PM
  • URL: https://guides.library.ucla.edu/semantic-web

Introduction to the Semantic Web Technologies

  • Reference work entry

  • John Domingue,
  • Dieter Fensel &
  • James A. Hendler

1 Introduction

The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation [6].

For newcomers to the Semantic Web, the above definition, taken from the article that is widely regarded as the starting point for the research area, is as good an introduction as any. The goal of the Semantic Web is in some sense a counterpoint to the Web of 2001: that Web was designed as a global document repository with very easy routes to access, publish, and link documents, and its documents were created to be accessed and read by humans.

The Semantic Web is a machine-readable Web. As implied above, a machine-readable Web facilitates human–computer cooperation. As appropriate and required, certain classes of tasks can be delegated to machines and therefore processed automatically. Of course, the design possibilities for a machine-readable Web are very large, and a number of...
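
As a toy illustration of this delegation (not any particular Semantic Web system), once statements are held as subject–predicate–object triples, a machine can answer simple questions by pattern matching instead of reading prose:

```python
# Toy triple store: facts a machine can query directly, in contrast to
# the same information buried in a human-readable document.
triples = [
    ("TimBL", "invented", "WWW"),
    ("WWW", "extendedBy", "SemanticWeb"),
    ("SemanticWeb", "usesFormat", "RDF"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the given pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

A call like match(p="invented") returns every triple whose predicate is "invented"; RDF, SPARQL, and triple stores industrialize exactly this idea.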

References

Adida, B., Birbeck, M. (eds.): RDFa primer: bridging the human and data webs, W3C Working Group Note (Oct 2008)

Antoniou, G., van Harmelen, F.: A Semantic Web Primer, 2nd edn. MIT Press, Cambridge (2008)

Baader, F., Calvanese, D., McGuinness, D., Nardi, D., Patel-Schneider, P. (eds.): The Description Logic Handbook. Cambridge University Press, Cambridge (2003)

Benjamins, V.R., Plaza, E., Motta, E., Fensel, D., Studer, R., Wielinga, B., Schreiber, G., Zdrahal, Z., Decker, S.: IBROW3 an intelligent brokering service for knowledge-component reuse on the world-wide web. In: Proceedings of the 11th Banff Knowledge Acquisition for Knowledge-Based System Workshop (KAW 1998), Banff. http://ksi.cpsc.ucalgary.ca/KAW/KAW98/benjamins3/ (1998). Accessed Aug 2010

Berners-Lee, T.: Information management: a proposal. March 1989; redistributed unchanged, apart from the date, in May 1990. http://www.w3.org/History/1989/proposal.html (1989). Accessed Aug 2010

Berners-Lee, T., Hendler, J., Lassila, O.: The semantic web. Scientific American Magazine, pp. 29–37 (May 2001)

Bizer, C., Heath, T., Berners-Lee, T.: Linked data – the story so far. Int. J. Semant. Web Inf. Syst. 5 (3), 1–22 (2009)

Brachman, R.J., Levesque, H.J.: Knowledge Representation and Reasoning. Morgan Kaufmann, San Francisco (2004)

Brickley, D., Guha, R.V. (eds.): RDF vocabulary description language 1.0: RDF schema. W3C Recommendation (Feb 2004)

Brin, S., Page, L.: The anatomy of a large-scale hypertextual web search engine. Comput. Netw. ISDN Syst. 30 (1–7), 107–117 (1998)

Bush, V.: As we may think. The Atlantic Monthly, July 1945. http://www.theatlantic.com/past/docs/unbound/flashbks/ computer/bushf.htm (1945). Accessed Aug 2010

Cerf, V., Kahn, R.E.: A protocol for packet network intercommunication. IEEE Trans. Commun. 22(5), 637–648 (1974)

Chen, W., Kifer, M., Warren, D.S.: HiLog: a foundation for higher-order logic programming. J. Log. Program. 15(3), 187–230 (1993)

Clocksin, W.F., Mellish, C.S.: Programming in Prolog, 5th edn. Springer, New York (2003)

Codd, E.F.: The Relational Model for Database Management: Version 2. Addison-Wesley Longman, New York (1990)

Dean, M., Schreiber, G. (eds.): OWL web ontology language reference, W3C Recommendation (Feb 2004)

Feigenbaum, E.A.: The art of artificial intelligence: themes and case studies of knowledge engineering. In: Proceedings of the Fifth International Joint Conference on Artificial Intelligence (IJCAI 1977), Cambridge (1977)

Fensel, D., Decker, S., Erdmann, M., Studer, R.: Ontobroker: the very high idea. In: Proceedings of the 11th International Florida Artificial Intelligence Research Society Conference (FLAIRS 1998), Sanibel Island, pp. 131–135 (1998)

Fensel, D., Angele, J., Decker, S., Erdmann, M., Schnurr, H.-P., Studer, R., Witt, A.: Lessons learned from applying AI to the web. J. Cooperat. Inf. Syst. 9 (4) (2000)

Fensel, D., van Harmelen, F., Horrocks, I., McGuinness, D.L., Patel-Schneider, P.F.: OIL: an ontology infrastructure for the semantic web. IEEE Intell. Syst. 16 (2), 38–45 (2001)

Fensel, D., Lausen, H., Polleres, A., De Bruijn, J., Stollberg, M., Roman, D., Domingue, J.: Enabling Semantic Web Services: The Web Service Modeling Ontology. Springer, New York (2007)

Fensel, D.: Ontologies: Silver Bullet for Knowledge Management and Electronic Commerce. Springer, Berlin (2001), 2nd edn. Springer (2003)

Garcia-Molina, H., Ullman, J.D., Widom, J.: Database Systems: The Complete Book, 2nd edn. Prentice Hall, New Jersey (2009)

Giarratano, J.C., Riley, G.D.: Expert Systems: Principles and Programming, 4th edn. PWS, Boston (2004)

Gruber, T.R.: A translation approach to portable ontology specifications. Knowl. Acquis. 5 (2), 199–220 (1993)

Gruber, T.: Siri: a virtual personal assistant. Keynote Presentation at Semantic Technologies Conference. http://tomgruber.org/writing/semtech09.htm (2009). Accessed Aug 2010

Gruber, T.: Big Think Small Screen: how semantic computing in the cloud will revolutionize the consumer experience on the phone. Keynote Presentation at Web 3.0 Conference. http://tomgruber.org/writing/web30jan2010.htm (2010). Accessed Aug 2010

Halpin, H., Davis, I. (eds.): GRDDL primer, W3C Working Group Note (June 2007)

Hedman, S.: A First Course in Logic. Oxford University Press, Oxford (2004)

Horrocks, I.: Using an expressive description logic: FaCT or fiction? In: Proceedings of the Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR 1998), pp. 636–647 (1998)

Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B., Dean, M.: SWRL: A semantic web rule language combining OWL and RuleML, W3C Member Submission (May 2004)

ter Horst, H.J.: Completeness, decidability and complexity of entailment for RDF schema and a semantic extension involving the owl vocabulary. J. Web Semant. 3 (2–3), 79–115 (2005)

Isaac, A., Summers, E. (eds.): SKOS simple knowledge organization system primer, W3C Working Group Note (Aug 2009)

Jurafsky, D., Martin, J.H.: Speech and Language Processing, 2nd edn. Prentice Hall, New Jersey (2009)

Kelly, J.: The Essence of Logic. Prentice Hall, New Jersey (1997)

Kifer, M., Boley, H. (eds.): RIF overview, W3C Working Group Note (June 2010)

Kifer, M., Lausen, G., Wu, J.: Logical foundations of object-oriented and frame-based languages. J. ACM 42(4), 741–843 (1995)

Lausen, H., Farrell, J. (eds.): Semantic annotations for WSDL and XML schema, W3C Recommendation (Aug 2007)

Lloyd, J.W.: Foundations of Logic Programming, 2nd edn. Springer, Berlin (1987)

Luke, S., Spector, L., Rager, D., Hendler, J.: Ontology-based Web agents. In: Proceedings of the First International Conference on Autonomous Agents (ICAA 1997), Marina del Rey, pp. 59–66 (1997)

Manning, C.D., Raghavan, P., Schutze, H.: Introduction to Information Retrieval. Cambridge University Press, Cambridge (2008)

Manola, F., Miller, E. (eds.): RDF primer, W3C Recommendation (Feb 2004)

Maslow, A.H.: The Psychology of Science. Harper & Row, New York (1966)

Mauritius National Assembly: The constitution. http://www.gov.mu/portal/AssemblySite/menuitem.ee3d58b2c32c60451251701065c521ca/ . Accessed 6 Sept 2010

Mead, G.H.: Mind, Self, and Society. The University of Chicago Press, Chicago (1934)

Moens, M.-F.: Information Extraction: Algorithms and Prospects in a Retrieval Context. Springer, New York (2006)

Motik, B., Grau, B.C., Horrocks, I., Wu, Z., Lutz, C. (eds.): OWL 2 web ontology language profiles, W3C Recommendation (Oct 2009)

Nelson, T.H.: A file structure for the complex, the changing, and the indeterminate. In: Proceedings of the 20th National Conference, Association for Computing Machinery, New York (1965)

Patel-Schneider, P.F., Hayes, P., Horrocks, I.: OWL web ontology language semantics and abtrsct syntax. Section 5. RDF-compatible model-theoretic semantics, W3C (2004)

Pingdom: Internet 2009 in numbers. http://royal.pingdom.com/2010/01/22/internet-2009-in-numbers/ (2010). Accessed 6 Sept 2010

Prud’hommeaux, E., Seaborne, A.: SPARQL query language for RDF, W3C Recommendation (Jan 2008)

Reynolds, D.: OWL 2 RL in RIF, W3C Working Group Note (June 2010)

Robinson, A., Voronkov, A. (eds.): Handbook of Automated Reasoning. Elsevier Science, Amsterdam (2001)

Russell, S., Norvig, P.: Artificial Intelligence – A Modern Approach, 2nd edn. Prentice Hall, New Jersey (2003)

Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W., Wielinga, B.: Knowledge Engineering and Management: The Common KADS Methodology. MIT Press, Cambridge (2000)

Tomasello, M.: Origins of Human Communication. MIT Press, Cambridge (2008)

http://www.bbc.co.uk/blogs/bbcinternet/2010/07/bbc_world_cup_2010_dynamic_sem.html . Accessed 6 Sept 2010

http://www.bbc.co.uk/blogs/bbcinternet/2010/07/the_world_cup_and_a_call_to_ac.html . Accessed 6 Sept 2010

http://www.businessinsider.com/apple-buys-siri-a-mobile-assistant-app-as-war-with-google-heats-up-2010-4 . Accessed 6 Sept 2010

http://challenge.semanticweb.org/ . Accessed 6 Sept 2010

http://www.comscore.com/Press_Events/Press_Releases/2010/1/Global_Search_Market_Grows_46_Percent_in_2009 . Accessed 6 Sept 2010

http://www.dagstuhl.de/en/program/calendar/semhp/?semnr = 00121 . Accessed 6 Sept 2010

http://developers.facebook.com/docs/opengraph . Accessed 6 Sept 2010

http://dictionary.reference.com/browse/meaning . Accessed 6 Sept 2010

http://dublincore.org/ . Accessed 6 Sept 2010

http://www.everyhit.com . Accessed 6 Sept 2010

http://www.fipa.org/ . Accessed 6 Sept 2010

http://www.foaf-project.org/ . Accessed 6 Sept 2010

http://googleblog.blogspot.com/2010/07/deeper-understanding-with-metaweb.html . Accessed 6 Sept 2010

http://www.ist-world.org/ProjectDetails.aspx?ProjectId=e132f5b74a41456f95611eb7ad3abfd3 . Accessed 6 Sept 2010

http://knowledgeweb.semanticweb.org/ . Accessed 6 Sept 2010

http://ksi.cpsc.ucalgary.ca/KAW/ . Accessed 6 Sept 2010

http://www2.labour.org.uk/gordon-browns-speech-on-building-britains-digital-future,2010-03-26 . Accessed 6 Sept 2010

http://www.larkc.eu/ . Accessed 6 Sept 2010

http://microformats.org/ . Accessed 6 Sept 2010

http://www.openlinksw.com/weblog/oerling/?id=1614 . Accessed 6 Sept 2010

http://www.oracle.com/technetwork/database/options/semantic-tech/index.html . Accessed 6 Sept 2010

http://www.pandia.com/sew/383-web-size.html . Accessed 6 Sept 2010

http://store.levi.com/ . Accessed 6 Sept 2010

http://tech.fortune.cnn.com/2010/07/29/google-the-search-party-is-over/ . Accessed 6 Sept 2010

http://techcrunch.com/2010/04/21/facebook-like-button/ . Accessed 6 Sept 2010

http://techcrunch.com/2010/08/17/when-wrong-call-yourself-prescient-instead/ . Accessed 6 Sept 2010

http://technology.timesonline.co.uk/tol/news/tech_and_web/the_web/article7104354.ece . Accessed 6 Sept 2010

http://www.wired.com/magazine/2010/08/ff_webrip/all/1

http://WordNet.princeton.edu/ . Accessed 6 Sept 2010

Wikipedia: Controlled vocabulary. http://en.wikipedia.org/wiki/Controlled_vocabulary (2010). Accessed 6 Sept 2010

Wikipedia: Energy. http://en.wikipedia.org/wiki/Energy (2010). Accessed 6 Sept 2010

Wikipedia: Equipment. http://en.wikipedia.org/wiki/Equipment (2010). Accessed 6 Sept 2010

Wikipedia: Formal_semanics. http://en.wikipedia.org/wiki/Formal_semantics (2010). Accessed 6 Sept 2010

Wikipedia: Idea. http://en.wikipedia.org/wiki/Idea (2010). Accessed 6 Sept 2010

Wikipedia:. Intention. http://en.wikipedia.org/wiki/Intention (2010). Accessed 6 Sept 2010

Wikipedia: Machine. http://en.wikipedia.org/wiki/Machine (2010). Accessed 6 Sept 2010

Wikipedia: NLS (computer system). http://en.wikipedia.org/wiki/NLS_%28computer_system%29 (2010). Accessed 6 Sept 2010

Wikipedia: OSI model. http://en.wikipedia.org/wiki/OSI_model (2010). Accessed 6 Sept 2010

Wikipedia: Purpose. http://en.wikipedia.org/wiki/Purpose (2010). Accessed 6 Sept 2010

Wikipedia: Second-order logic. http://en.wikipedia.org/wiki/Second-order_logic (2010). Accessed 6 Sept 2010

Wikipedia: Semantic HTML. http://en.wikipedia.org/wiki/Semantic_HTML (2010). Accessed 6 Sept 2010

Wikipedia: Semantics. http://en.wikipedia.org/wiki/Semantics (2010). Accessed 6 Sept 2010

Wikipedia: SLD_resolution. http://en.wikipedia.org/wiki/SLD_resolution (2010). Accessed 6 Sept 2010

Wikipedia:. SQL. http://en.wikipedia.org/wiki/SQL (2010). Accessed 6 Sept 2010

Wikipedia: Tag cloud. http://en.wikipedia.org/wiki/Tag_cloud (2010). Accessed 6 Sept 2010

Wikipedia: Taxonomies. http://en.wikipedia.org/wiki/Taxonomies (2010). Accessed 6 Sept 2010

Wikipedia: Tower of Babel. http://en.wikipedia.org/wiki/Tower_of_Babel (2010). Accessed 6 Sept 2010

Wiktionary: Device. http://en.wiktionary.org/wiki/device (2010). Accessed 6 Sept 2010

World Wide Web Consortium: OWL web ontology language reference, W3C Recommendation. http://www.w3.org/TR/owl-ref/ (Feb 2004). Accessed 6 Sept 2010

World Wide Web Consortium: RDB2RDF working group. http://www.w3.org/2001/sw/rdb2rdf/ . Accessed 6 Sept 2010

World Wide Web Consortium: RDF primer, W3C Recommendation. http://www.w3.org/TR/rdf-primer/ . Accessed 6 Sept 2010

World Wide Web Consortium: The global structure of an HTML document, W3C Recommendation. http://www.w3.org/TR/html401/struct/global.html#edef-META . Accessed 6 Sept 2010

World Wide Web Consortium: XML technology, W3C Standard. http://www.w3.org/standards/xml/ . Accessed 6 Sept 2010

Yeates, G.: Earthworms. Te Ara – the encyclopedia of New Zealand (updated 1 March 2009). http://www.teara.govt.nz/en/earthworms/3/1 (2009). Accessed 6 Sept 2010


Acknowledgments

We thank Ian Horrocks and Michael Kifer for preventing mistakes in the sections of the chapter related to logic. We also thank Neil Benn for his help in the final formatting stages.

Author information

Authors and Affiliations

Knowledge Media Institute, The Open University, Walton Hall, MK7 6AA, Milton Keynes, UK

John Domingue

STI Innsbruck, University of Innsbruck, Technikerstraße 21a, Innsbruck, 6020, Austria

Dieter Fensel

Department of Computer Science, Rensselaer Polytechnic Institute, 12180, Troy, NY, USA

James A. Hendler


Corresponding author

Correspondence to John Domingue .

Editor information

Editors and Affiliations

Knowledge Media Institute, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK

STI Innsbruck, University of Innsbruck, Technikerstraße 21a, 6020, Innsbruck, Austria

Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA


Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this entry

Cite this entry

Domingue, J., Fensel, D., Hendler, J.A. (2011). Introduction to the Semantic Web Technologies. In: Domingue, J., Fensel, D., Hendler, J.A. (eds) Handbook of Semantic Web Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92913-0_1


DOI : https://doi.org/10.1007/978-3-540-92913-0_1

Publisher Name : Springer, Berlin, Heidelberg

Print ISBN : 978-3-540-92912-3

Online ISBN : 978-3-540-92913-0

eBook Packages : Computer Science; Reference Module: Computer Science and Engineering





Getting Started with Semantic Technologies

If you’re brand new to semantic technologies, the concept can be very overwhelming. Different sites and people will talk about everything from artificial intelligence to natural language processing to linked data and the Semantic Web. What are they all? How do they relate to each other? How do they relate to you?

This set of lessons aims to ground you in the basics. The lessons give you the basic definitions and goals that form the foundation of a solid, clear understanding of semantics.

Introduction to the Semantic Web

The Semantic Web, Web 3.0, the Linked Data Web, the Web of Data…whatever you call it…represents the next major evolution in connecting and representing information. It enables data to be linked from a source to any other source and to be understood by computers so that they can perform increasingly sophisticated tasks on our behalf.

This lesson introduces the Semantic Web, putting it in the context of both the evolution of the World Wide Web as we know it today as well as data management in general, particularly in large enterprises.

Course Objectives

After completing this lesson, you will know:

  • How Semantic Web technology fits into the past, present, and future evolution of the Internet.
  • How Semantic Web technology differs from existing data management technologies, such as relational databases and the current state of the World Wide Web.
  • The three primary international standards that help encode the Semantic Web.

The World Wide Web was invented by Sir Tim Berners-Lee in 1989, a surprisingly short time ago. The key technology of the original Web—from an end user’s point of view, anyway—was the hyperlink. A user could click on a link and immediately (well, back then, almost immediately) go to the document identified in that link.

In summary, the great advantage of Web 1.0 was that it abstracted away the physical storage and networking layers involved in information exchange between two machines. This breakthrough enabled documents to appear to be directly connected to one another. Click a link and you’re there—even if that link goes to a different document on a different machine on another network on another continent!

In the same way that Web 1.0 abstracted away the network and physical layers, the Semantic Web abstracts away the document and application layers involved in the exchange of information. The Semantic Web connects facts, so that rather than linking to a specific document or application, you can instead refer to a specific piece of information contained in that document or application. If that information is ever updated, you can automatically take advantage of the update.

This may appear at first to be a very subtle advantage, but it is one that will be illustrated in detail in the various lessons here at Semantic University.

Today’s Lesson

How Is the Semantic Web Different?

The word “semantic” implies meaning or understanding. As such, the fundamental difference between Semantic Web technologies and other technologies related to data (such as relational databases or the World Wide Web itself) is that the Semantic Web is concerned with the meaning and not the structure of data. Note: Other semantic technologies include Natural Language Processing (NLP) and Semantic Search. We will compare these technologies in separate lessons.

This fundamental difference engenders a completely different outlook on how storing, querying, and displaying information might be approached. Some applications, such as those that refer to a large amount of data from many different sources, benefit enormously from this feature. Others, such as the storage of high volumes of highly structured transactional data, do not. Understanding when it is a good idea and when it is not a good idea to apply Semantic Web technologies is one of the primary objectives of the Semantic University. These topics are addressed in more detail in future lessons.
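The contrast between structure-centric and meaning-centric storage can be sketched in a few lines of Python. This is an illustrative sketch only: the record, column names, and URIs are invented for the example. A relational row is meaningful only inside its table's schema, whereas each semantic statement stands alone as a subject-predicate-object fact that can be linked to facts from any other source:

```python
# A relational-style record: its meaning lives in the table schema.
row = {"id": 42, "name": "Aspirin", "class": "NSAID"}

def row_to_statements(row, base="http://example.org/drug/"):
    """Flatten a keyed record into standalone (subject, predicate, object)
    statements, the shape the Semantic Web uses. Every column becomes a
    self-describing, reusable fact."""
    subject = base + str(row["id"])
    return [(subject, base + key, value) for key, value in row.items() if key != "id"]

statements = row_to_statements(row)
# Each fact can now be merged with facts about the same subject from other sources.
for s, p, o in statements:
    print(s, p, o)
```

Because every statement carries its own subject and predicate, merging data from two independent sources reduces to taking the union of their statement sets, with no schema reconciliation step.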

What Standards Apply to the Semantic Web?

From a technical point of view, the Semantic Web consists primarily of three technical standards:

  • RDF (Resource Description Framework): The data modeling language for the Semantic Web. All Semantic Web information is stored and represented in RDF.
  • SPARQL (SPARQL Protocol and RDF Query Language): The query language of the Semantic Web. It is specifically designed to query data across various systems.
  • OWL (Web Ontology Language): The schema language, or knowledge representation (KR) language, of the Semantic Web. OWL enables you to define concepts composably so that these concepts can be reused as much and as often as possible. Composability means that each concept is carefully defined so that it can be selected and assembled in various combinations with other concepts as needed for many different applications and purposes.
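To make the division of labor between RDF and SPARQL concrete, here is a minimal, self-contained Python sketch. It is not a real RDF store, and the `ex:` prefixes and data are invented for the example: the set of triples plays the role of an RDF graph, and the pattern-matching function mimics what a SPARQL SELECT query does.

```python
# A tiny "graph" of (subject, predicate, object) triples, RDF-style.
graph = {
    ("ex:SemanticWeb", "ex:hasStandard", "ex:RDF"),
    ("ex:SemanticWeb", "ex:hasStandard", "ex:SPARQL"),
    ("ex:SemanticWeb", "ex:hasStandard", "ex:OWL"),
    ("ex:RDF", "ex:role", "data model"),
}

def select(graph, pattern):
    """SPARQL-like matching: terms starting with '?' are variables.
    Returns one variable-binding dict per triple that fits the pattern."""
    results = []
    for triple in graph:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break  # constant term does not match; discard this triple
        else:
            results.append(binding)
    return results

# Analogous to: SELECT ?std WHERE { ex:SemanticWeb ex:hasStandard ?std }
standards = select(graph, ("ex:SemanticWeb", "ex:hasStandard", "?std"))
print(sorted(b["?std"] for b in standards))  # ['ex:OWL', 'ex:RDF', 'ex:SPARQL']
```

Real SPARQL engines add joins across multiple patterns, filters, and federation across endpoints, but the core operation is exactly this: binding variables against triples.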

One way to differentiate a Semantic Web application from any other application is its use of those three technologies. However, the Semantic Web has been called many things, such as Web 3.0 or the Linked Data Web. Some of these names carry great significance, even with regard to the technology stack, so we’ll cover this topic in a separate lesson.

A contemporary expression of Semantic Web technology is the “knowledge graph.” Over the years, the Semantic Web vision has been hindered by a number of factors, including misguided applications, lack of scale, and perceived complexity. The knowledge graph construct has emerged to help developers and decision makers more tightly constrain the development and application of the Semantic Web standards.

Tools and techniques have matured such that enterprise-scale knowledge graph applications are feasible and ready for mainstream use. Similar to the growth of Web 1.0, knowledge graphs are forming the Semantic Web (sometimes referred to as the Machine Web) one knowledge graph at a time.

Semantic Web technologies as a whole have made tremendous strides in the last decade. Some highlights include:

  • The Linked Open Data movement has grown massively every single year and contains far more information than any single resource anywhere on the Web.
  • Large organizations—such as Merck, Johnson & Johnson, Chevron, Staples, GE, the US Department of Defense, NASA, and others—now rely on Semantic Web technologies to run critical daily operations.
  • The Semantic Web standards—RDF, SPARQL, OWL, and others—were merely drafts in 2001, but they have now been formalized and ratified.

Truly, an entire industry has been born in the past ten years, complete with multiple trade shows on several continents, a growing user community, and active standards bodies.

That said, there is still significant room for growth.

  • Despite recent huge strides on the part of Schema.org, Facebook’s Open Graph, and others, the vision of an entire Web of interoperable data has not yet been realized.
  • Notwithstanding significant early corporate adoption by a select few frontrunners, most companies have not yet started using (or are even unaware of the existence of) Semantic Web technologies.
  • The learning curve for using Semantic Web technologies is perceived to be steep because few educational resources currently exist for users new to the concepts, and still fewer resources can be found that discuss when and how to apply the technologies to real world scenarios.

Here at Semantic University, we’re focusing on that last point.


Int J Clin Pract. 2022;2022.

Semantic Web in Healthcare: A Systematic Literature Review of Application, Research Gap, and Future Research Avenues

A. K. M. Bahalul Haque

1 Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh

B. M. Arifuzzaman

Sayed Abu Noman Siddik, Tabassum Sadia Shahjahan, T. S. Saleena

2 PG & Research Department of Computer Science, Sullamussalam Science College Areekode, Malappuram, Kerala 673639, India

Morshed Alam

3 Institute of Education and Research, Jagannath University, Dhaka 1100, Bangladesh

Md. Rabiul Islam

4 Department of Pharmacy, University of Asia Pacific, 74/A Green Road, Farmgate, Dhaka 1205, Bangladesh

Foyez Ahmmed

5 Department of Statistics, Comilla University, Kotbari, Cumilla, Bangladesh

Md. Jamal Hossain

6 Department of Pharmacy, State University of Bangladesh, 77 Satmasjid Road, Dhanmondi, Dhaka 1205, Bangladesh

Associated Data

The data used to support the findings of this study are included within the article.

Abstract

Today, healthcare has become one of the largest and most fast-paced industries due to the rapid development of digital healthcare technologies. Enhancing healthcare services fundamentally depends on communicating and linking the massive volumes of available healthcare data. The key challenge in reaching this ambitious goal, however, is enabling information exchange across heterogeneous sources and methods and establishing efficient tools and techniques. Semantic Web (SW) technology can help to tackle these problems. It can enhance knowledge exchange, information management, data interoperability, and decision support in healthcare systems. It can also be utilized to create various e-healthcare systems that aid medical practitioners in making decisions and provide patients with crucial medical information and automated hospital services. This systematic literature review (SLR) of SW in healthcare systems aims to assess and critique previous findings while adhering to appropriate research procedures. We examined 65 papers and identified five themes: e-service, disease, information management, frontier technology, and regulatory conditions. In each thematic research area, we presented the contributions of the prior literature. We emphasized the topic by responding to five specific research questions. We conclude the SLR by identifying research gaps and establishing future research goals that will help to minimize the difficulty of adopting SW in healthcare systems and provide new approaches for the progress of SW-based medical systems.

1. Introduction

The detection and remedy of illnesses by medical professionals is expressed as healthcare. The healthcare system consists of medical practitioners, researchers, and technologists who work together to provide affordable and quality healthcare services. They tend to generate considerable amounts of data from heterogeneous sources to enhance diagnostic accuracy, elevate quick treatment decisions, and pave the way for the effective distribution of information between medical practitioners and patients. However, these valuable data must be organized appropriately so that they can be retrieved when required.

One of the main challenges in utilizing medical healthcare data is extracting knowledge from heterogeneous data sources. The interoperability of well-being and clinical information poses tremendous obstacles due to data irregularity and inconsistency in structure and organization [1, 2]. This is also because data are stored in various administrative areas, making it challenging to retrieve knowledge, establish a primary access route, and analyze the information. The information from a hospital can prove very useful in healthcare if these data are shared, analyzed, integrated, and managed regularly. Again, platforms that provide healthcare services also face dilemmas in automating time-efficient and low-cost web service arrangements [3]. This indicates that meaningful healthcare solutions must be proposed and implemented to provide extensive functionality based on electronic health record (EHR) workflows and data flow to enable scalable and interoperable systems [4], such as a blockchain-based smart e-health system that provides patients with an easy-to-access electronic health record through a distributed ledger containing records of all occurrences [5–8]. A standards-based and scalable semantic interoperability framework is required to integrate the patient care and clinical research domains [9]. The increasing number of knowledge bases, the heterogeneity of schema representations, and the lack of conceptual descriptions make the processing of these knowledge bases complicated. Non-experts find it challenging to combine this knowledge with patient databases to facilitate data sharing [10]. Similarly, ensuring the certainty of disease diagnosis also becomes a greater challenge for health providers. Brashers et al. [11] examined the significance of credible authority and the level of confidence HIV patients have in their medical professionals. Many participants agreed that doctors might not be fully informed of their ailment, but they emphasized the value of a strong patient-physician bond. With the help of big data management techniques, these challenges can be minimized. Likewise, CrowdHEALTH aims to establish a new paradigm of holistic health records (HHRs) that incorporate all factors defining health status by facilitating individual illness prevention and health promotion through the provision of collective knowledge and intelligence [11–13]. A similar approach is adopted by the beHealthier platform, which constructs health policies out of collective knowledge by using a newly proposed type of electronic health record (i.e., eXtended Health Records (XHRs)) and analysis of ingested healthcare data [14]. Making healthcare decisions during the diagnosis of a disease is a complex undertaking. Clinicians combine their subjectivity with experimental and research artifacts to make diagnostic decisions [9].

In recent years, Web 2.0 technologies have significantly changed the healthcare domain. However, given the growing expectation of being able to access data from anywhere, driven primarily by the widespread use of smartphones, computers, and cloud applications, they are no longer sufficient. To address such challenges, Semantic Web technologies have been adopted over time to facilitate the efficient sharing of medical knowledge and establish a unified healthcare system. Tim Berners-Lee, also known as the father of the web, first introduced the Semantic Web (SW) [15]. The term “Semantic Web” refers to linked data formed by combining information with intelligent content. The SW is an extension of the World Wide Web (WWW) and provides technologies for human agents and machines to understand web page contents, metadata, and other information objects. It also provides a framework for any kind of content, such as web pages, text documents, videos, speech files, and so on. The linked data comprise technologies such as the Resource Description Framework (RDF), the Web Ontology Language (OWL), SPARQL, and SKOS. The SW aims to create an intelligent, flexible, and personalized environment that influences various sectors and professions, including the healthcare system.

Data interoperability can only be improved when the semantics of the content are well defined across heterogeneous data sources. Ontology is one of the semantic tools frequently used to support interoperability and communication between software, communities, and healthcare organizations [16, 17]. It is also commonly used to personalize a patient's environment. Kumari et al. [18] and Haque et al. [19] proposed an Android-based personalized healthcare monitoring and appointment application that considers health parameters such as body temperature and blood pressure to keep track of the patient's health and provide in-home medical services. Some existing medical ontologies are Gene, NCI, GALEN, LinkBase, and UMLS [20]. Ontologies have also been used to offer e-healthcare systems based on GPS tracking and user queries. Osama et al. proposed two ontologies for medical differential diagnosis: a disease symptom ontology (DSO) and a patient ontology (PO) [21]. Sreekanth et al. used semantic interoperability to propose an application that brings together different actors in the health insurance sector [22]. The Semantic Web not only enables information system interoperability but also addresses some of the most challenging issues with automated healthcare web service settings. SW combined with AI, IoT, and other technologies has produced smart healthcare systems that enable the standardization and depiction of medical data [1, 23, 24]. In terms of economic efficiency, the Semantic Web-Based Healthcare Framework (SWBHF) is said to benchmark the existing BioMedLib search engine [25]. SW has also offered a new user-oriented dataset information resource (DIR) to boost dataset knowledge and health informatics [26]. This technology is also used in the rigorous registration process to discover, classify, and compose web services for the service owner [4]. To provide answers to medical questions, it has been integrated with NLP to create RDF datasets and link them to their source text [27]. Babylon Health, which enables doctors to prescribe medications to patients using mobile applications, has benefited from the spread of semantic technology. Archetypes, ontologies, and datasets have been used in web-based methods for colorectal cancer screening diagnosis. Clinical information and knowledge about disease diagnosis are encoded for decision making with the use of ontological understanding and probabilistic reasoning. The integration of pharmaceutical and medical knowledge, as well as IoT-enabled smart cities, has made extensive use of SW technologies [8]. In brief, this emerging technology has revolutionized the healthcare and medical system.
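As a small illustration of the kind of inference ontologies contribute here, the sketch below encodes a toy is-a hierarchy in plain Python and checks subsumption, the core reasoning service an OWL reasoner provides. The disease names and the hierarchy are invented for this example and are not drawn from DSO, PO, or any real medical ontology.

```python
# Toy ontology: each concept maps to its direct superclass (is-a links).
# Invented hierarchy for illustration only.
subclass_of = {
    "Influenza": "ViralInfection",
    "ViralInfection": "Infection",
    "Infection": "Disease",
}

def is_a(concept, ancestor):
    """Follow is-a links upward; True if `ancestor` subsumes `concept`.
    This is the inference that lets a query for 'Disease' also retrieve
    records annotated with the more specific 'Influenza'."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = subclass_of.get(concept)
    return False

print(is_a("Influenza", "Disease"))  # True: inferred via the is-a chain
print(is_a("Disease", "Influenza"))  # False: subsumption is directional
```

In a real deployment, the hierarchy would be expressed in OWL and the inference delegated to a reasoner, which also handles multiple inheritance, property restrictions, and consistency checking, but the direction of the inference is the same.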

Despite its relevance, researchers who investigated the benefits of SW efforts found substantial deficiencies across the wide range of semantic information in the medical and healthcare sectors. To the best of our knowledge, no previous systematic literature review (SLR) has been published on the Semantic Web in this context, and no prior research has classified the precise application areas in which SW can be applied. Furthermore, the previous literature lacked research questions for analyzing and comparing similar works in order to understand their flaws, strengths, and problems.

In this study, we present a systematic review of the literature on the Semantic Web in healthcare, with an emphasis on its application domains. It is essential to point the SW user community in the right direction for future research, to broaden knowledge of the research topics, and to determine which domains of study are essential and must be pursued. Thus, the current SLR can help researchers by addressing a number of factors that either limit or encourage the medical and healthcare industries to employ Semantic Web technologies. Furthermore, the study also identifies various gaps in the existing literature and suggests future research directions to help resolve them. The research questions (RQs) that this systematic review seeks to answer are as follows. (RQ1) What is the research profile of the existing literature on the Semantic Web in the healthcare context? (RQ2) What are the primary objectives of using the Semantic Web, and what are the major areas of medicine and healthcare where Semantic Web technologies are adopted? (RQ3) Which Semantic Web technologies are used in the literature, and what are the common technologies considered by each solution? (RQ4) What evaluation procedures are used to assess the efficiency of each solution? (RQ5) What are the research gaps and limitations of the prior literature, and what future research avenues can be derived to advance Web 3.0 or Semantic Web technology in medicine and healthcare?

This research contributes in a number of ways. The paper's main focus is the collection of statistical data and analysis results concerning the adoption of SW technologies in the medical and healthcare fields. First, we gathered data from five publishers, including Scopus, IEEE Xplore Digital Library, ACM Digital Library, and Semantic Scholar, to thoroughly review, analyze, and synthesize past research findings. Furthermore, the current study does not focus on a specific theme; rather, it offers a broad overview of all possible research themes related to the use of SW in healthcare. Finally, this SLR identifies gaps in the existing literature and suggests a future research agenda. The primary contributions of our study are as follows:

  • To find out the up-to-date research progress of SW technology in medical and healthcare.
  • To open up new technical fields in healthcare where SW technologies can be used.
  • To identify all the constraints in the healthcare industry during the adoption of SW technologies.
  • To identify key future trends for semantics in the healthcare sector.
  • To analyze and investigate alternative strategies for ensuring semantic interoperability in the healthcare contexts.

This review paper is organized as follows. Section 1 introduces Semantic Web technologies in healthcare. Section 2 describes the methodology followed, the inclusion/exclusion criteria, and the data extracted and analyzed in this literature review. Section 3 discusses the different thematic areas, and Section 4 presents the research gaps to address future research agendas. Section 5 presents a detailed discussion of the specified RQs. Lastly, Section 6 concludes this SLR.

2. Methodology

A systematic review is a research study that examines many publications to answer a specific research question. This study follows such a review to examine previous research, identifying, analyzing, and interpreting all accessible information relevant to the recent progress of the pertinent literature on Web 3.0 or the Semantic Web in medicine and healthcare, our phenomenon of interest. In the advancement of medical and healthcare analysis, numerous SLRs have been undertaken with inductive methodologies to identify major themes where Semantic Web technologies are being adopted [28, 29]. In our study, we adopted the procedures outlined by Keele, with a few important distinctions, to ensure the study's transferability, dependability, and transparency, emphasizing and documenting the selection method [30]. The guidelines outlined in that paper were derived from three existing approaches used by medical researchers, two books written by social science researchers, and discussions with other academics interested in evidence-based practice [8, 31–40]. The guidelines adapted medical-domain procedures in order to address the unique challenges of software engineering research.

Our study sequentially conducted an SLR to accomplish the precise objectives. At first, we planned the necessary approach to identify the problems. Next, we collected related study materials and retrieved data from them. Finally, we documented the findings and carried out the research in the following steps (see Figure 1 ) maintaining its replicability as well as precision.

  • Step 1 . Plan the review by finding appropriate research measures to detect corresponding documents.
  • Step 2 . Collect analyses by outlining the inclusion and exclusion criteria to assess their applicability.
  • Step 3 . Extract relevant data using multiple screening approaches.
  • Step 4 . Document the research findings.

Figure 1. SLR methodology and protocols.

2.1. Planning the Review

The very first stage in conducting an SLR is to identify the need for the specific systematic review, outline the research questions, design a review procedure, and offer a study framework to guide the investigation in subsequent phases toward the systematic review's significant objectives. This phase begins with the identification of the need for the proposed systematic review; Section 1 of this paper explained in detail why a systematic review of Semantic Web technologies in healthcare was deemed necessary. Following that, the research questions are defined and the synthesis method, initial keywords, and databases are selected. To begin, we devised the RQs for this SLR in order to gain a comprehensive understanding of semantic-based solutions in the field of healthcare. Defining research questions is an important part of conducting a systematic review because they guide the overall review methodology. Based on the objectives, we conducted a pilot study of fifteen sample studies, which narrowed the broad application of the Semantic Web to a specific niche, refined the research questions, and redefined the review protocol. To find relevant scientific contributions for our RQs, we used Scopus, IEEE Xplore Digital Library, ACM Digital Library, and Semantic Scholar. Furthermore, we utilized the primary term “Web 3.0 or Semantic Web” to search the databases and then identified and refined the comprehensive keywords that would be used as search strings. We did not limit our search to a single time period; instead, we examined all related studies.

2.2. Collecting Analyses

A systematic review's unit of analysis is crucial since it frames the scope of the overall approach. This study aims to better understand how Web 3.0 or Semantic Web technologies are employed in medical and healthcare settings, as well as to identify the extent to which they have been applied. We selected academic research articles and journals as the unit of analysis for our SLR. We specified inclusion and exclusion criteria to narrow the investigation in the subsequent study selection process, as shown in Table 1 . To gather our search phrases, we used a nine-step procedure as described in [ 41 ]. The studies obtained from online repositories were compared against the exclusion criteria to select peer-reviewed papers and eliminate any non-peer-reviewed studies. We employed decisive exclusion criteria to identify grey literature, which included white papers, theses, project reports, and working papers. To remove language barriers, we only selected papers written in English. We did not consider any review papers or project reports, to maintain quality. Older publications that have never been cited were excluded from the review, as our aim was to explore the potential value of Web 3.0 and SW technologies in medical and healthcare.

Criteria for inclusion and exclusion.

Inclusion criteria (IC):

  • Primary studies
  • Peer-reviewed publications
  • Studies written in the English language
  • Research based on empirical evidence (qualitative or quantitative)
  • Journal articles published through January 22, 2022
  • Studies available in full text
  • Studies that focus on the Semantic Web to support medical and healthcare
  • Any published study that has the potential to address at least one research question

Exclusion criteria (EC):

  • Studies not written in English
  • White papers, working papers, positional papers, review papers, short papers (<4 pages), and project reports
  • Theses, editorials, keynotes, forum conversations, posters, analyses, tutorial overviews, technological articles, and essays
  • Grey literature, i.e., editorials, abstracts, keynotes, and studies without bibliographic information (e.g., publication date/type, volume, and issue number)
  • Research that does not focus on the SW to support medical and healthcare

2.3. Extracting Relevant Data

Initially, we searched for papers in Google Scholar with the keywords “Web 3.0 in medical and healthcare.” Reviewing the titles and abstracts of the top 50 articles helped refine the keywords into a more appropriate search string. The resulting search string (“Semantic Web” OR “Web 3.0”) AND (“Healthcare” OR “medical”) was used in Scopus, IEEE Xplore Digital Library, ACM Digital Library, and Semantic Scholar to find related papers for our SLR on 22 January 2022. We found a total of 4137 papers, including 2237 from IEEE Xplore Digital Library, 1761 from Scopus, 103 from Semantic Scholar, and 36 from ACM Digital Library. The preliminary review captured articles dating back to 2001, so all identified publications were from 2001 to 2021. Four authors performed the screening in several stages; after each stage, a discussion session was held to finalize it before moving further.

At first, we checked for duplicate articles across the indexing databases, eliminating duplicates by checking the Digital Object Identifier (DOI) and the article title. After removing the duplicate articles, we were left with 1923 articles. Then, titles, keywords, and abstracts were read as part of the preliminary screening process, during which articles were divided into three categories: retain, exclude, and suspect. After removing articles unrelated to Web 3.0 or the Semantic Web in medical and healthcare, 1741 articles were retained. Upon analyzing the contents of both the suspect and retain studies using the inclusion and exclusion criteria listed in Table 1 , we were left with 343 publications. Following that, we read the full text of the selected articles, leaving 54 papers for our conclusive stage. Finally, we applied the snowballing strategy, also known as the citation chaining technique [ 42 ]. This step resulted in the addition of another ten studies (seven from backward citation and three from forward citation). The final review pool thus comprised 65 papers ( Figure 2 depicts the study selection process in detail).
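The screening pipeline above hinges on reliable de-duplication by DOI and title. The following minimal Python sketch illustrates that step; the field names and sample records are hypothetical and do not come from the review's actual data.

```python
import re

def normalize_title(title):
    """Lowercase and collapse punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first record for each DOI (or normalized title when the DOI is absent)."""
    seen = set()
    unique = []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records as they might arrive from two databases.
records = [
    {"doi": "10.1000/x1", "title": "Semantic Web in Healthcare"},
    {"doi": "10.1000/x1", "title": "Semantic web in healthcare"},   # duplicate DOI
    {"doi": None, "title": "Ontology-Based EHR Integration"},
    {"doi": None, "title": "ontology based EHR integration!"},      # duplicate title
]
print(len(deduplicate(records)))  # 2
```

Title normalization matters because the same paper is often indexed with different capitalization or punctuation across databases, and DOIs are not always present.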

Figure 2. Study selection process.

2.4. Document Research Findings

The shortlisted research papers were profiled using descriptive statistics, including publication year, methodology, and publication source [ 23 , 43 , 44 ]. In the chronology of publication counts, the largest number of research articles was published in 2013; between 2018 and 2021, however, the number declined. Figure 3 depicts the yearly (2001 to 2022) distribution of published papers.

The majority of the studies presented a framework for developing a medical data information management system. Web 3.0 technologies appear to be in their early phases of adoption, with scholars only recently becoming interested in the topic. A few other papers discussed medical data interchange mechanisms, diseases, frontier technologies such as AI and NLP, and regulatory conditions. Most of the research ( n  = 39) was published between 2001 and 2012, with the remaining studies ( n  = 26) published after that (see Figure 3 ). The Semantic Web gained widespread interest after Tim Berners-Lee, the architect of the World Wide Web, together with James Hendler and Ora Lassila, popularized it in a Scientific American article in May 2001 [ 15 ]. The trend also gained momentum in later years, with John Markoff coining the term Web 3.0 in 2006 and Gavin Wood, Ethereum's co-founder, reviving the term in 2014.

Figure 3. Number of articles published yearly.

Medical and healthcare works have been published in several renowned conferences, journals, book series, and events. The 65 shortlisted papers are distributed across 27 conference proceedings, 21 journals, and 17 book series. The descriptive analysis shows that the 65 shortlisted studies were released by 25 publishers, led by Springer ( n  = 17), IEEE Xplore ( n  = 15), IOS Press ( n  = 6), ACM ( n  = 5), and Elsevier ( n  = 3). Only a few publishers published multiple studies; the rest comprised 15 publishers, each of whom published only one study. The majority of the papers were published in Lecture Notes in Computer Science (LNCS), CEUR Workshop Proceedings, and the Studies in Health Technology and Informatics series (see Figure 4 ). Furthermore, our SLR demonstrates the wide geographic span of the existing research: France (23 articles), the United States (11 articles), India (9 articles), Canada (8 articles), Belgium (4 articles), and South Korea (4 articles) all contributed a significant number of studies. Figure 5 summarizes the literature's geographical distribution.

Figure 4. Publication-source-wise distribution.

Figure 5. Country-wise article distribution.

According to the systematic literature review, the application of Semantic Web technologies in healthcare is a prominent classical research theme with many innovative and promising research topics. The number of Semantic Web publications and the interest in healthcare have increased rapidly in recent years, and Semantic Web methods, tools, and languages are being used to solve the complex problems that today's healthcare industries face. Semantic Web technology allows comprehensive knowledge management and sharing, as well as semantic interoperability across application, enterprise, and community boundaries. This makes the Semantic Web a viable option for improving healthcare services through tasks such as standardizing interoperable, semantically rich metadata for complex systems, representing patient records, investigating the integration of the Internet of Things and artificial computational methods in disease identification, and outlining SW-based security. While there are interesting possibilities for the application of Semantic Web technologies in the healthcare setting, some limitations may explain why those possibilities are less apparent. We believe one reason is a lack of support for developers and researchers: Semantic Web-based healthcare applications should be viewed as independent research prototypes that must be implemented in real-world scenarios rather than as widgets integrated with Web 2.0-based solutions. This study discusses the findings and future directions from two perspectives: Section 3 considers the potential applications of Semantic Web technologies in different healthcare scenarios, along with the barriers to their practical application and how to overcome them, and Section 4 discusses the scope of research in Semantic Web-enabled healthcare.

3. Analysis of the Selected Articles: Thematic Areas

This section focuses on three key steps: summarizing, comparing, and discussing the shortlisted papers to describe and categorize them into common themes. To systematically analyze all 65 studies, we adopted the technique used in recently published SLRs [ 23 , 43 ]. After identifying and selecting relevant papers that could answer our research questions, we used the content analysis technique to classify, code, and synthesize the findings of those studies. A three-step approach proposed by Erika Hayes et al. was used to interpret unambiguous and unbiased meaning from the content of text data [ 45 ]. The steps were as follows: (a) the authors assigned categories to each study and created a coding scheme directly and inductively from the raw data using valid reasoning and interpretation; (b) the authors immersed themselves in the material and allowed themes to arise from the data to validate or extend the categories and coding schemes using directed content analysis; (c) the authors used summative content analysis, which begins with manifest content and then expands to identify latent meanings and themes in the research areas.

This thematic analysis answers the second research question (RQ2), “What are the primary objectives of using the Semantic Web, and what are the major areas of medical and healthcare where Semantic Web technologies are adopted?”, and this analysis architecture highlights five broad medical and healthcare-related research themes based on their primary contribution (see Table 2 ), notably e-healthcare service, diseases, information management, frontier technology, and regulatory conditions.

Derived themes and their descriptions.

  • E-healthcare service: Healthcare services and resources that are improved or supplied over the Internet and other associated technologies to reduce the burden on patients.
  • Diseases: A wide range of illnesses, including dementia, diabetes, chronic disorders, cardiovascular disease, and critical limb ischemia. The objective is to use SWT to integrate medical information and data from various electronic health data sources for efficient diagnosis and clinical services.
  • Information management: The process of gathering, evaluating, and preserving medical data required for providing high-quality healthcare management systems. In this thematic area, we discuss how SWT can be used to improve the management of massive healthcare data.
  • Frontier technology: In a broad sense, technologies such as artificial intelligence, the various spectrum of IoT, augmented reality, and genomics that are pushing the boundaries of technological capabilities and adoption. Only works in which scholars combined the Semantic Web with frontier technologies to meet healthcare demand are included in this category.
  • Regulatory conditions: Activities that aim to develop adequate underlying motives and beliefs, guidelines, and healthcare protocols across healthcare facilities and systems. Research in this theme focuses on improving good practice and clinical norms using SWT for documenting the semantics of medical and healthcare data and resources.

Two themes, namely, IoT and cloud computing, were nevertheless left out, since too few studies described them broadly enough to develop a meaningful theme. The papers from which we had derived these two thematic areas were instead assigned to the selected themes based on their similarity to them. Figure 6 illustrates this categorization, with descriptions of the themes that emerged from our review.

Figure 6. Thematic description of Semantic Web approaches in medical and healthcare.

3.1. E-Healthcare Service

The use of various technologies to provide healthcare support is known as e-service in healthcare or e-healthcare service. While staying at home, a person can obtain all the necessary medical information as well as a variety of healthcare services such as disease reasoning, medication, and recommendation through e-healthcare services. It is similar to a door-to-door service. The Semantic Web or Web 3.0 plays a critical role in this regard. The Semantic Web offers a variety of technologies, including semantic interoperability, semantic reasoning, and morphological variation that can be used to create a variety of frameworks that improve e-healthcare services.

SW makes the task of sharing medical information among healthcare experts more efficient and easier [ 2 , 46 – 48 ]. A dataset information resource for medical knowledge makes this work faster and more trouble-free; one such healthcare dataset information resource has been created along with a question-answering module for health information [ 26 ]. Combining different databases can be even more effective, as it expands the range of available knowledge. In this respect, Barisevičius et al. [ 49 ] designed a medical linked data graph that combines different medical databases, and they also developed a chatbot using NLP-based knowledge extraction that provides healthcare services by supplying knowledge about various medical topics. Besides information sharing and database combining, Semantic Web-based frameworks can provide virtual medical and hospital-based services. A system has been created that provides medical health planning according to a patient's information [ 50 ]. It would also be very helpful to have a system that matches patient requirements with available services; such a matchmaking system has been developed to match web services with patients' requirements for medical appointments [ 51 ]. To provide hospital-based services, a Semantic Web-based dynamic healthcare system was developed using ontologies [ 17 ]. Disease reasoning is a vital task for e-healthcare services, and a number of frameworks were developed for it [ 49 , 52 , 53 ]. In addition, some authors implemented systems that support sequential decision making [ 54 – 57 ]. Moreover, Mohammed and Benlamri [ 21 ] designed a system that could help to prescribe differential diagnosis recommendations. Grouping patients with similar diagnoses can enhance the medication process; in this regard, Fernández-Breis et al. [ 58 ] created a framework to group patients by identifying patient cohorts. Moreover, Kiourtis et al. [ 59 ] proposed a new device-to-device (D2D) protocol for short-distance health data exchange between a healthcare professional and a citizen utilizing a sequence of Bluetooth communications. Supplying medical information to people is one of the main tasks of e-healthcare services [ 58 ]. Before proceeding with a medical diagnosis, we need to be sure about the correctness of the procedure; Andreasik et al. [ 60 ] developed a Semantic Web-based framework to determine the correctness of medical procedures. Various systems for medical education were developed using Semantic Web technologies, such as a web service delivery system [ 4 ], a web service searching system [ 61 ], and an e-learning framework for patients to learn about different medical topics [ 62 , 63 ]. Some articles discussed rule-based approaches for the advancement of medical applications [ 64 , 65 ]. Quality assurance of Semantic Web services is also necessary; a framework was created using a Semantic Web-based replacement policy that assures the quality of a set of services and replaces it with a newly defined subset of services when the existing one fails in execution [ 3 ]. A framework was also designed for Semantic Web-based data representation [ 66 ]. Finally, Meilender et al. [ 67 ] described the migration of Web 2.0 data into Semantic Web data to ease further advancement toward Web 3.0.

Researchers used different Semantic Web services to convert relational databases into Resource Description Framework (RDF)- and Web Ontology Language (OWL)-based ontologies. This is done by extracting instances from the relational databases and representing them as RDF datasets [ 21 , 55 , 57 , 62 ]. In prior literature, many RDF datasets were created using Apache Jena 4.0 [ 4 ], different versions of Protégé were used to construct and represent various healthcare ontologies [ 2 , 17 ], the Apache Jena framework was used for OWL reasoning on RDF datasets [ 50 , 53 ], and the EYE engine was used for reasoning [ 54 ]. Besides, Kiourtis et al. [ 68 ] developed a technique for converting healthcare data into its equivalent HL7 FHIR structure, which principally corresponds to the most commonly used data structures for describing healthcare information. Furthermore, a sublanguage of F-logic named Frame Logic for Semantic Web Services (FLOG4SWS), together with web services and some features of Flora-2, was used to represent ontologies [ 51 ]. The authors of several papers used RDF and OWL for the data representation of different ontologies [ 50 , 52 , 54 , 66 ]. Mohammed and Benlamri [ 21 ] offered a number of Semantic Web strategies for ontology alignment, such as ontology matching and ontology linking, while others used ontology mapping for alignment [ 58 , 66 ]. By combining RDF and semantic mapping features, Perumal et al. [ 69 ] provided a translation mechanism for healthcare event data along with Semantic Web services and decision making. In addition, a linked data graph (LDG) was utilized to combine numerous publicly available medical data sources using RDF converters [ 49 ]. The works in [ 52 , 54 ] used Notation3 for data mapping. SPARQL was used as the query language for the database [ 2 , 17 , 50 , 52 , 57 ], and the Jena API was also used for querying [ 21 ]. The Semantic Web's rules and logic were expressed in terms of OWL concepts using the Semantic Web Rule Language (SWRL) [ 55 , 57 ], and TopBraid Composer was used as the Semantic Web modeling tool [ 60 ].
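The RDF and SPARQL machinery referenced throughout this paragraph rests on a simple idea: data are stored as subject-predicate-object triples and queried by pattern matching. A production system would use Apache Jena or a comparable framework; the pure-Python sketch below, with invented patient/diagnosis terms, only illustrates the underlying triple-pattern matching.

```python
# A graph is just a list of (subject, predicate, object) triples.
graph = [
    ("ex:patient1", "ex:hasDiagnosis", "ex:Diabetes"),
    ("ex:patient1", "ex:hasAge", "63"),
    ("ex:patient2", "ex:hasDiagnosis", "ex:Dementia"),
]

def match(graph, pattern):
    """Return variable bindings ('?x'-prefixed terms) for each triple matching the pattern."""
    results = []
    for triple in graph:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t        # a variable binds to whatever it meets
            elif p != t:
                break                 # a constant must match exactly
        else:
            results.append(binding)
    return results

# Analogous to: SELECT ?p WHERE { ?p ex:hasDiagnosis ex:Diabetes }
hits = match(graph, ("?p", "ex:hasDiagnosis", "ex:Diabetes"))
print(hits)  # [{'?p': 'ex:patient1'}]
```

Real SPARQL engines additionally join multiple patterns, enforce consistent bindings across them, and apply filters, but the matching step shown here is the core operation.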

There was no proof that the systems created using semantic networks were able to share knowledge among healthcare services [ 2 , 46 , 48 ]. Researchers did not mention how a system could be integrated with the different types of datasets available worldwide [ 2 , 47 ]. In their paper, Ramasamy et al. [ 3 ] did not mention whether the system could replace all types of services or not. Shi et al. [ 26 ] did not discuss the success rate of the datasets in their dataset information resource or the accuracies of different systems created with them. No proper evaluation techniques were given for the linked data graph [ 49 ], the Semantic Web service delivery systems [ 4 , 50 ], or the Semantic Web reasoning systems [ 52 , 53 ]. There is no discussion of the reliability and validity of numerous decision-making and recommendation systems [ 21 , 54 , 70 ]. Podgorelec and Gradišnik [ 64 ] did not provide information about the advantages of combining Semantic Web technologies with rule-based systems over other alternatives. Most of the articles discussed or proposed various techniques for building different healthcare services, but only a few implemented the proposed systems and tested them in a real-life context.

3.2. Diseases

This thematic area aims to specifically identify and discuss the contributions of Semantic Web technologies to achieving interoperability of information in the healthcare sector and to aiding the initial detection and nursing of diseases such as diabetes, chronic conditions, cardiovascular disease, and dementia. SW provides a framework to integrate medical knowledge and data for effective diagnosis and clinical service, helping to select patients, recognize drug effects, and analyze results using electronic health data from numerous sources. The queryMed packages were proposed for pharmaco-epidemiologists to link medical and pharmacological knowledge with electronic health records [ 10 ]. This application searches for people with critical limb ischemia (CLI) with at least one medication, or none at all, and gives them healthcare recommendations. SW also emphasizes the study of phenotypes and their influence on personal genomics. The Mayo Clinic's Linked Clinical Data (LCD) project facilitates the use of SW and makes it easier to extract and express phenotypes from electronic medical records [ 71 ]. It also emphasizes the use of semantic reasoning for the identification of cardiovascular diseases and aims to improve healthcare service quality for people suffering from chronic conditions. Proper planning and management are required for the better treatment and management of chronic diseases; thus, the Chronic Care Model (CCM) provides knowledge-based acquisition to patients [ 72 ].

Ontology-based applications such as the Concept Unique Identifier (CUI) from the Unified Medical Language System, the Drug Indication Database (DID), and the Drug Interaction Knowledge Base (DIKB) are widely used in the medical domain to establish mappings between medical terms [ 10 ]. In the context of ontology, the ECOIN framework uses a single-ontology, multiple-view approach that exploits modifiers and conversion functions for context mediation between different data sources [ 73 ]. To support clinical knowledge sharing through interaction models across different data sources, the OpenKnowledge project was initiated [ 9 ], and the K-MORPH architecture has been proposed for a unified prostate cancer clinical pathway.

Along with information sharing, medical data management is critical in the diagnosis of disorders like dementia. To establish a better diagnosis method for dementia, a medical information management system (MIMS) was designed using SW technologies through the extraction of metadata from medical databases [ 74 , 75 ]. To further alleviate the e-health information- and knowledge-sharing crisis, Bai and Zhang [ 76 ] suggested the Integrated Mobile Information System (IMIS) for healthcare, which provides a platform connecting diabetic patients with care providers so that they can receive proper treatment and diagnosis facilities at home. The Diabetes Healthcare Knowledge Management project likewise aims to ease decision support and clinical data management in diabetes healthcare processes [ 72 ].

To construct decision models for the Diabetes Healthcare Knowledge Management framework, tools such as the Semantic Web Rule Language (SWRL), OWL, and RDF were used. This ontology-based knowledge framework provides ontologies, patient registries, and an evidence-based healthcare resource repository for chronic care services [ 72 ]. Web Ontology Language (OWL), Resource Description Framework (RDF), and SPARQL were also commonly used for the creation of metadata in dementia diagnosis [ 77 ]. On the other hand, the Semantic Web-based retrieval system of the pathology project known as “A Semantic Web for Pathology” involves building and managing an ontology for the lungs, constructed with the common semantic tools RDF and OWL along with the RDQL query language [ 20 ].
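SWRL rules of the kind used in such decision models are essentially if-then statements over ontology terms. As a hedged illustration only (the HbA1c threshold, property names, and inferred class are assumptions for the example, not drawn from the cited frameworks), such a rule can be applied by simple forward chaining over a triple set:

```python
def apply_rule(graph):
    """Rule (SWRL-style): hasHbA1c(?p, ?v) and ?v >= 6.5  =>  DiabetesRisk(?p)."""
    inferred = []
    for s, p, o in graph:
        # When the antecedent matches a triple, add the consequent triple.
        if p == "ex:hasHbA1c" and float(o) >= 6.5:
            inferred.append((s, "rdf:type", "ex:DiabetesRisk"))
    return graph + inferred

graph = [
    ("ex:patient1", "ex:hasHbA1c", "7.1"),
    ("ex:patient2", "ex:hasHbA1c", "5.4"),
]
enriched = apply_rule(graph)
print(enriched[-1])  # ('ex:patient1', 'rdf:type', 'ex:DiabetesRisk')
```

A real SWRL engine would iterate rules to a fixed point and work over typed OWL individuals rather than string literals, but the classify-by-rule pattern is the same.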

Even though effective frameworks have been proposed to diagnose certain diseases, research gaps still exist that affect medical data management. For instance, the fuzzy technique-based service-oriented architecture has proved beneficial in terms of adjustability and reliability, but its applicability in the context of domain-specific ontologies is yet to be validated [ 78 ]. Effective distribution of knowledge into the existing healthcare system remains a huge challenge in augmenting decision making and improving care service quality. Therefore, future work should focus on embedding knowledge and conducting user evaluations for better disease management.

3.3. Information Management

Managing patients' information and storing test results are significant tasks in the medical and healthcare industries, and SW-based approaches can have a substantial impact on how these data are organized. An early approach of this kind gathered valuable new medical information by creating a network of computers [ 79 ]. A domain ontology was created according to the user's preferences, suggesting medical terminologies to retrieve customized medical information [ 80 ]. RDF datasets can be used to assess the trustworthiness of intensive care unit (ICU) medical data [ 70 ]. The SW has also been used to document healthcare video contents [ 81 ] and radiological images to provide appropriate information about those records [ 82 ].

The move from conventional web-based information management to the Semantic Web was driven by several factors. As medical knowledge must be verified and shared across hospitals and medical centers, introducing the Semantic Web approach helped to achieve a proper mapping system [ 83 ]. A medical discussion forum based on the SW helped healthcare practitioners exchange valuable data and map related information in the dataset [ 84 ]. The use of a fuzzy cognitive system in the SW also helped to share and reuse knowledge from databases and to simplify maintenance [ 85 ]. This methodology further improved data integration, analysis, and sharing between clinical and information systems and researchers [ 86 ], and it aided researchers in connecting different data storage domains and creating effective mapping graphs [ 87 ].

Though SW approaches in healthcare cover a broad area, most applications are quite similar. The proposed frameworks mainly used RDF, SPARQL, and OWL [ 4 , 76 ]. Link relevance methods were used to produce semantically relevant results and extract pertinent information from domain knowledge [ 49 ]. Ontology-based logical framework procedures and the SMS architecture helped to organize the heterogeneous domain network [ 88 , 89 ].

Evaluating a system's performance is necessary to obtain actual results. A Health Level 7 (HL7) messaging mechanism was developed for mapping the generated Web Service Modeling Ontology [ 90 ]; however, there were some issues regarding heterogeneity, and the JavaSIG API was used to generate the HL7 messages to resolve them [ 91 ]. Some evaluation tools are not advanced enough to handle vast amounts of data; PMCEPT physics algorithms were used to verify the algorithm [ 92 ]. Abidi and Hussain [ 9 ] created two levels to characterize different ontological models to establish morphing. The BioMedLib search engine, created for economic efficiency, helped to develop a Semantic Web framework for rural people [ 25 ]. The Metamorphosis installation wizard converted the text-format UMLS into a MySQL-database UMLS in order to provide access through a SPARQL endpoint [ 93 ].

However, the frameworks proposed in several studies were not implemented precisely, which leaves a gap in each of them. Some frameworks are proposed for integration with the blockchain for additional security and privacy [ 23 , 94 – 96 ], and AI and IoT integration can also enhance system maintenance [ 1 ]. Hussain et al. [ 97 ] suggested a framework named Electronic Health Record for Clinical Research (EHR4CR), but no actual results from this framework in the real world have been reported. Implementing these proposed frameworks will enable further development in this area.

3.4. Frontier Technology

In this segment, we critically analyze works that primarily focus on how cutting-edge technologies like AI and computer vision can be applied to the medical field amid the continuous advancement of science and technology. Semantic Web-enabled intelligent systems leverage a knowledge base and a reasoning engine to solve problems; they can help healthcare professionals with diagnosis and therapy and can assist with medical training in resource-constrained environments. To illustrate, Haque et al. [ 8 ], Chondrogiannis et al. [ 98 ], Haque and Bhushan [ 99 ], and Haque et al. [ 24 ] created secure, fast, and decentralized applications that use blockchain technologies to allow users and health insurance organizations to reach an agreement during the implementation of healthcare insurance policies in each contract. To preserve the formal expression of both insured users' data and contract terms, health standards and Semantic Web technologies were used. Significant work has also been proposed by Tamilarasi and Shanmugam [ 100 ], which explores the relationship between the Semantic Web, machine learning, deep learning, and computer vision in the context of medical informatics and introduces several areas of application for machine learning and deep learning algorithms. This study also presents a hypothesis on how an image as ontology can be used in medical informatics and how ontology-based deep learning models can help advance computer vision.

Real-world healthcare datasets are prone to missing, inconsistent, and noisy data due to their heterogeneous nature. Machine learning and data mining algorithms fail to identify patterns effectively in such noisy data, resulting in low accuracy, so data preprocessing is essential to obtain high-quality data. Besides, RDF datasets representing healthcare knowledge graphs are very important in data mining and in integrating IoT data with machine learning applications [ 8 , 101 ]. An RDF dataset comprises a default RDF graph and zero or more named graphs, which are pairings of an IRI or blank node with an RDF graph. While RDF graphs have formal model-theoretic semantics that indicate which world configurations make a graph true, there are no formal semantics for RDF datasets. Unlike traditional tabular datasets, RDF datasets require the declarative SPARQL query language to match graph patterns to RDF triples, which makes data preprocessing more crucial. In this context, Monika and Raju [ 101 ] proposed a cluster-based missing value imputation (CMVI) preprocessing strategy for preparing raw data to enhance the imputed data quality of a diabetes ontology graph. The data quality evaluation metrics R2, D2, and root mean square error (RMSE) were used to assess the simulated missing values.
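To illustrate what "matching graph patterns to RDF triples" means without a triple store, here is a minimal sketch of a single SPARQL-style basic graph pattern matcher in plain Python. The `ex:` IRIs and values are invented for the example:

```python
# Toy illustration of SPARQL-style basic graph pattern matching over RDF
# triples, without external libraries. Names and IRIs are hypothetical.

triples = [
    ("ex:alice", "ex:hasCondition", "ex:diabetes"),
    ("ex:alice", "ex:hasGlucose", "7.2"),
    ("ex:bob",   "ex:hasCondition", "ex:hypertension"),
]

def match(pattern, triples):
    """Match one (s, p, o) pattern; strings starting with '?' are variables.
    Returns a list of variable bindings, like a single-pattern SPARQL query."""
    results = []
    for triple in triples:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break  # constant term does not match this triple
        else:
            results.append(binding)
    return results

# Analogue of: SELECT ?who WHERE { ?who ex:hasCondition ex:diabetes }
rows = match(("?who", "ex:hasCondition", "ex:diabetes"), triples)
print(rows)  # [{'?who': 'ex:alice'}]
```

A real SPARQL engine additionally joins bindings across multiple patterns, but even this one-pattern version shows why querying graph-shaped data differs from filtering rows in a table.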

Nowadays, question-answering (QA) systems (e.g., chatbots and forums) are becoming increasingly popular in digital healthcare. To retrieve the required information, such systems require in-depth analysis of both user queries and records. NLP is the underlying technology that converts unstructured text into standardized data, increasing the accuracy and reliability of electronic health records. A Semantic Web application for question-answering using NLP has been deployed in which users can ask questions about health-related information [ 27 ]. In addition, that study introduces a novel query simplification methodology for question-answering systems that overcomes limitations of existing NLP methodologies (e.g., implicit information and the need for reasoning).
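A deliberately naive sketch of the query simplification idea: reduce a free-text health question to content keywords, then rank stored answers by keyword overlap. The stopword list, records, and scoring here are illustrative assumptions, not the methodology of [ 27 ]:

```python
# Naive QA sketch: simplify a question to keywords, rank answers by overlap.
# Stopwords, records, and keyword sets are hypothetical illustrations.

STOPWORDS = {"what", "is", "the", "a", "an", "of", "for", "are", "how", "do", "i"}

records = {
    "Influenza is a viral infection treated with rest and fluids.":
        {"influenza", "viral", "infection", "treatment", "rest", "fluids"},
    "Type 2 diabetes is managed with diet, exercise, and medication.":
        {"diabetes", "diet", "exercise", "medication", "manage"},
}

def simplify(question):
    """Lowercase, strip punctuation, tokenize, and drop stopwords."""
    tokens = question.lower().replace("?", "").split()
    return {t for t in tokens if t not in STOPWORDS}

def answer(question):
    """Return the stored record with the largest keyword overlap."""
    keywords = simplify(question)
    return max(records, key=lambda r: len(records[r] & keywords))

print(answer("What is the treatment for influenza?"))
```

Production systems replace the keyword-overlap step with semantic matching against an ontology, which is precisely where Semantic Web technologies add value over plain text search.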

The majority of contributions in this category have applied semantic languages on a smaller scale. It is noteworthy that hardly any of the approaches, except [ 27 , 101 ], adopted a framework for developing their models. Asma Ben et al. used a benchmark (a corpus for evidence-based medicine summarization) to evaluate their question-answering (QA) system and analyzed the obtained outcomes [ 27 ]. Some studies did not include a prior literature review to discover available frontier services [ 100 ]. In addition, this review shows that, with the soaring demand for better, speedier, more accurate, and personalized patient treatment, deep learning-powered models in production are becoming increasingly prevalent; often these models are not easily explainable and are prone to biases. Explainable AI (XAI) has grown in popularity in healthcare due to its extraordinary success in explaining decision-making criteria, reducing unintended outcomes and bias, and helping gain patients' trust, even in life-or-death decisions [ 102 ]. To the best of our knowledge, XAI has gained attention in ontology-based data management but has received relatively little attention in combination with Semantic Web technologies across healthcare, biomedical, clinical research, and genomic medicine. Similarly, within the IoT spectrum, the invocation of semantic knowledge and logic across Medical Internet of Things (MIoT) applications, which gather vast amounts of data, monitor vital body parameters, and collect detailed information from sensors and other connected devices while maintaining safety, data confidentiality, and service availability, has also received relatively little attention.

3.5. Regulatory Conditions

This segment concentrates on Semantic Web-based tools, technologies, and terminologies for documenting the semantics of medical and healthcare data and resources. As healthcare industries generate a massive amount of heterogeneous data on a global scale, applying a knowledge-based ontology to such data can reduce mortality rates and healthcare costs and facilitate early detection of contagious diseases. Besides, the SW provides a single platform for sharing and reusing data across apps, companies, and communities. The biomedical community has specific requirements for the Semantic Web of the future, and a variety of languages, each with its own level of expressiveness, can be used to formalize medical healthcare ontologies. A collaborative effort led by W3C, involving many research and industrial partners, set the requirements for medical ontologies. A real ontology of brain cortex anatomy was used to assess the requirements stated by W3C in two languages available at the time, Protégé and DAML + OIL [ 103 ]; the development and comparative analysis of brain cortex anatomy ontologies are partially addressed in this work. In 2019, a survey-based study was conducted to determine faculty and researcher usage, impact, and satisfaction with Web 3.0 networking sites in relation to medical academic performance [ 104 ]. This study explores the awareness and willingness to implement Web 3.0 technologies within healthcare at Rajiv Gandhi University of Health Sciences. Its results imply that Web 3.0 technologies have an impact on professors' and researchers' academic performance, with those who are tech-savvy being disproportionately found in high-income groups [ 104 ].

Documentation of semantic tools and data is required to resolve healthcare reimbursement challenges. Besides, regulations are also necessary to standardize semantic tools while ensuring that healthcare communities and systems adhere to general health policies. Unfortunately, we found only a few works focusing on this challenge based on SWT. Only the study conducted by Sugihartati [ 104 ] adopted a proper survey methodology. Therefore, future efforts should focus on regulating, documenting, and standardizing semantic tools, technologies, and health resources, as well as conducting user evaluations to understand and optimize functional efficiency and accelerate market access for medicines for general health.

Tables 3–5 provide a detailed analysis of the studied works for the five derived categories.

Summarization of the research contribution of the selected articles.

Themes | Contributions
E-healthcare services:
(i) An ontology-based semantic server for healthcare organizations to exchange information among them [ ].
(ii) Discussed healthcare data interoperability and integration plan of the solution [ ].
(iii) Used Semantic Web terms (SWT) to provide oral medicine knowledge and information [ ] and to build a decision support system [ ].
(iv) Developed a prototype that generates the desired reports using a high degree of data integration and discussed a production rule-based approach to establish a link between prevalent diseases and the range of the diseases in a particular gene [ ].
(v) Represented global ontology via bridge methods to avoid conflicts among different local ontologies [ ].
(vi) Implemented a WSMO (Web Service Modeling Ontology) automated service delivery system [ ].
(vii) Designed a system for automatic alignment of user-defined EHR (electronic health record) workflows [ ].
(viii) Proposed an upper-level-ontological service providing a mechanism to provide integrity constraints of data and to improve the usability of the medical linked data graph (LDG) services [ ].
(ix) Developed a chatbot and a triaging system that provides information about diseases, screens users' problems, and sorts patients into groups based on the user's needs [ ].
(x) Developed a healthcare dataset information resource (DIR) to hold dataset information and respond to parameterized questions [ ].
(xi) A healthcare service framework that coordinates web services to locate the closest hospital, ambulance service, pharmacy, and laboratory during an emergency [ ].
(xii) Used web service replacement policy to build a Semantic Web service composition model which replaces a set of services with a generated service subset when the previous set of services fails in execution [ ].
(xiii) Proposed ontology-based data linking to understand and extract medical information more precisely [ ].
(xiv) Integrated knowledge with clinical practice to provide guidelines in medicine [ ].
(xv) An abstraction method that converts XML-type medical information to RDF and OWL to create electronic health record (EHR) architecture for the identification of patient cohorts [ ].
(xvi) Designed a platform for solving complex medical tasks by interpreting algorithms and meta-components [ ].
(xvii) Provided a recommendation strategy based on computing user similarity and demonstrated the model's effectiveness through design, implementation, and evaluation in social learning environments [ ].
(xviii) Constructed semantic relationships of input and output medical-related parameters to resolve conflicts and algorithms that remove the redundancy of web service paths [ ].
(xix) Used a management time and run time subsystem to discover the potential web services [ ].
(xx) Integrated weak inferring with a single and explanation-based generalization to leverage the complementary strengths [ ].
Disease:
(i) Created an ontology to build and manage information about a particular disease [ ].
(ii) Developed a web-based prototype of Integrated Mobile Information System for healthcare of diabetic patients [ ].
(iii) Implemented embedded feedback between users and designers and communication mechanisms between patients and care providers [ ].
(iv) The QueryMed package integrated clinical and pharmacological information to identify all medications prescribed for critical limb ischemia (CLI) and to recognize one contraindicated prescription for one patient [ ].
(v) A semantics-driven system based on EMRs that can break down multifactorial phenotypes, like peripheral arterial disease and coronary heart disease [ ].
(vi) Discussed an approach to a unified prostate cancer clinical pathway that incorporates three different clinical pathways: the Halifax, Calgary, and Winnipeg pathways [ ].
(vii) Demonstrated the feasibility and acceptability of a distributed web-oriented environment as an effective study and validation technique for characterizing a real-life setting [ ].
Information management:
(i) Proposed a brief process of integration for interoperability and scalability to create an ontology of inflammation [ ].
(ii) Discussed an indexing mechanism to extract attributes from an audio-visual web system [ ].
(iii) Developed ontology-enabled security enforcement for hospital data security [ ].
(iv) Semantic Web mining-based ontologies allow medical practitioners to have better access to the databases of the latest diseases and other information [ ].
(v) Proposed a medical knowledge morphing system to focus on ontology-based knowledge articulation and morphing of diverse information through logic-based reasoning with ontology mediation [ ].
(vi) The annotation image (AIM) ontology was created to give essential semantic information within photos, allowing radiological images to be mined for image patterns that predict the biological properties of the structures they include [ ].
(vii) Described a semantic data architecture where an accumulative approach was used to append data sources [ ].
(viii) Implemented a functional web-based remote MC system and PMCEPT code system, as well as a description of how to use a beam phase space dataset for dosimetric and radiation therapy planning [ ].
(ix) Discussed an approach using Notation3 (N3) over RDF to present a generic approach to formalizing medical knowledge [ ].
(x) It was demonstrated that in the healthcare domain, knowledge management approaches and the synergy of social media networks may be used as a foundation for the creation of information system (IS). This helps to optimize data flow in healthcare processes and provides synchronized knowledge for better healthcare decision making (cardiac diseases) [ ].
(xi) Using semantic mining principles, the authors described a technique for minimizing information asymmetry in the healthcare insurance sector to assist clients in understanding healthcare insurance plans and terms and conditions [ ].
(xii) Discussed a mapping-based approach to generate Web Service Modeling Ontology (WSMO) description from HL7 (Health Level 7) V3 specification where Messaging Modeling Ontology (MMO) is mapped with WSMO [ ].
(xiii) Designed a web crawler-based search engine to gather medical information as per patients' needs [ ].
(xiv) A framework where patients can get relevant medical information from a personalized database, where the patient's medical history and current health condition are captured and then analyzed to search for particular information regarding the patient's needs [ ].
(xv) Demonstrated an Electronic Health Record for Clinical Research (EHR4CR) semantic interoperability approach for bridging the clinical care and clinical research domains [ ].
(xvi) SNOMED-CT ontologies were used to map big laboratory datasets with metadata in the form of clinical concepts [ ].
(xvii) An online medical discussion forum where practitioners can start a topic-specific discussion and then the platform analyzes centrality measurements and semantic similarity metrics to find the most prominent practitioners in a discussion forum [ ].
(xviii) Developed a UMLS-OWL conversion system to translate UMLS content into an OWL 2 ontology that can be queried and inferred via a SPARQL endpoint [ ].
(xix) Researchers used SPA to detect illness and connect patients to the best-suited specialist. They also described a schema representing a database query that enables doctors to select the most suitable EHR and patient data in healthcare scenarios [ ].
(xx) The Semantic Web, blockchain, and Graph DB were combined to provide a patient-centric perspective on healthcare data in a cooperative healthcare system [ ].
Frontier technology:
(i) A cluster-based missing value imputation (CMVI) preprocessing strategy for preparing raw data is designed to enhance the imputed data quality of a diabetes ontology graph [ ].
(ii) Presented hypotheses on how image as ontology can be used in medical informatics and how ontology-based deep learning models can help computer vision [ ].
(iii) Discussed a deep learning technique called the ontology-based restricted Boltzmann machine (ORBM) that can be used to gain an understanding of electronic health records (EHRs) [ ].
(iv) Developed a Semantic Web app for question-answering using NLP where users can ask questions about health-related information [ ].
Regulatory conditions:
(i) One needs DAML + OIL to express sophisticated taxonomic knowledge; rules should aid in defining dependencies between relations using predicates of arbitrary arity, while metaclasses may be useful for taking advantage of current medical standards [ ].
(ii) Described using Web 3.0-based social application for medical knowledge and communication with others and with faculty members [ ].
(iii) The impact of Web 3.0 awareness on the academic performance of Rajiv Gandhi University of Health Sciences faculty and researchers was investigated [ ].
(iv) Users can insert structured clinical information in the domains using SNOMED-CT terms [ ].
(v) Demonstrated the congruence between health informatics and Semantic Web standards, obstacles in representing Semantic Web data, and barriers in using Semantic Web technology for web service [ ].
(vi) The significant qualities of a Semantic Web language for medical ontologies were discussed [ ].

Summarization of the research gaps and future research avenues.

Themes | Research gaps
E-healthcare services:
(i) No interoperable healthcare system has yet been deployed [ ].
(ii) Researchers have not yet looked into the policy limits of video as ontologies at an organizational level [ ].
(iii) Prior research has focused solely on the limitations and policies of expanding an existing healthcare delivery system to directly recommend medications to users without the assistance of medical professionals [ ].
(iv) Scholars are yet to investigate how Web 3.0 can be used to promote education through resource sharing [ ].
(v) There is a dearth of studies on the role of existing healthcare applications in detecting patient severity levels based on the health data collected from patients [ ].
(vi) No prior studies have considered a system that can automatically discover, select, and compose web services [ ].
(vii) The challenges of connecting big data to RDF, as well as privacy and security concerns, were not addressed [ ].
(viii) The extant literature includes only a few examples where researchers have developed a systematic clinical validation system based on the study [ ].
(ix) The prior literature has yet to identify ways to improve the similarity score between service parameters using statistics-based strategies and natural language processing techniques [ ].
(x) No previous studies in the Internet of Things domain have considered the semantic interoperability assessment between healthcare data, services, and applications [ ].
Disease:
(i) No previous work had proposed an ontology for a healthcare system to efficiently store ontological data with proper evaluation criteria that meet W3C standards [ ].
(ii) Researchers have yet to put into practice a full-featured Integrated Mobile Information System for diabetic healthcare [ ].
(iii) No prior work has expanded the set of prebuilt queries for a particular disease to handle a wide range of use cases through possible linked data evolutions [ ].
(iv) Earlier studies did not consider mapping the triplets of one disease RDF to other existing medical services, applications, and administrations in order to conduct client assessments [ ].
(v) There is a deficit of research on the development of intelligent user interfaces that understand the semantics of clinical data [ ].
Information management:
(i) There is a knowledge gap in current research on indexing higher-quality videos for better attribute extraction [ ].
(ii) Indexing strategy for retrieving attributes from an audio-visual web system is yet to be addressed [ ].
(iii) Need for a greater understanding of Semantic Web applications related to web mining to build ontologies for healthcare websites [ ].
(iv) Prior literature addressed only modeling and annotation for a specific disease such as urinary tract infection diseases. The literature is yet to identify methods for generalizing clinical application models [ ].
(v) No studies on the asymmetry minimization system take into account both the insurer's and the existing patient's perspectives [ ].
(vi) The literature is yet to find ways to complete the WSMO generator from HL7 with a user interface [ ].
Frontier technology:
(i) The literature has yet to find ways for web applications to combine natural language processing (NLP) and domain knowledge induction in decision making and to automate medical healthcare services [ ].
(ii) The literature is yet to discover a technique to combine cloud computing, AI, and quantum physics with a platform to anticipate the chemical and pharmacological properties of small-molecule compounds for medication research and design [ ].
Regulatory conditions:
(i) Lack of information on the Semantic Web tools before the authors moved on to the architecture of the system [ ].
(ii) There was no emphasis on the semantic quality of available languages in any of the literature evaluation steps [ ].

Future research avenues in the form of research questions.

Themes | Future research avenues
E-healthcare services:
(i) What features should an interoperability framework contain in order to be considered complete [ ]?
(ii) What technologies are required to generate video file ontologies, and what are the drawbacks of doing so [ ]?
(iii) What approaches may healthcare organizations use to provide medical recommendations without consulting the medical practitioners directly [ ]?
(iv) How can the healthcare industry use Web 3.0 to boost medical education [ ]?
(v) What strategies can be applied to assess a patient's severity level based on the patient's collected health data [ ]?
(vi) What technologies can be utilized to create web services, and how can a system automatically determine and choose the optimal web services for it [ ]?
(vii) What kinds of security precautions should be considered while sharing information over the web [ ]?
(viii) When it comes to adopting a Web 3.0-based clinical validation system, what technological skills and facility-related challenges do researchers face? What steps should be taken to ensure that clinical processes are validated [ ]?
(ix) What strategies and techniques can healthcare organizations use to increase similarity scores between service parameters [ ]?
(x) How can semantic interoperability between healthcare data, services, and applications be assessed in the context of the Internet of Things [ ]?
Disease:
(i) Which policies and regulations may ontological systems use to comply with W3C standards [ ]?
(ii) How can scholars expand a disease's set of queries to cover a wider range of use cases [ ]?
(iii) What will be the most effective user interface designs for massive data networks that can interpret the semantics of clinical data [ ]?
Information management:
(i) What are the recommendations for indexing high-quality videos in Graph DB to increase attribute extraction [ ]?
(ii) What procedures must be followed in order to extract attributes from the data that are gathered from different audio-visual web systems [ ]?
(iii) Is it possible to improve the performance of the Web Service Modeling Ontology generator with a modified user interface [ ]?
(iv) Will the RDF ontology be able to replace web crawlers in terms of retrieving required data from the web [ ]?
Frontier technology:
(i) How can web applications automate medical healthcare services by combining natural language processing (NLP) with domain knowledge induction in decision making [ ]?
(ii) How could the Semantic Web platform anticipate the chemical and pharmacological properties of small-molecule compounds using cloud computing, quantum physics, and artificial intelligence [ ]?
(iii) What are the procedures to implement ontology-based restricted Boltzmann machine (ORBM) in electronic healthcare record (EHR) [ ]?
Regulatory conditions:
(i) Which techniques can be used to optimize NLP for transforming pathology report segments into XML [ ]?
(ii) What strategies and activities may developers employ to address semantic quality issues in existing languages [ ]?

4. Research Gaps

This systematic literature review presents extensive knowledge about the use of Web 3.0, or Semantic Web, technology in different approaches within the medical and healthcare sector. By analyzing the literature, we identified research gaps and corresponding future research avenues, which will enable scholars from different fields to examine the area and discover new developments. Table 4 summarizes the overall research gaps and Table 5 summarizes the future research avenues we encountered during the literature review.

4.1. Scope of E-Healthcare Service Research

Even though studies in the domain of e-healthcare services have suggested and created numerous frameworks to provide vital support to users, research gaps remain among the methods. Several frameworks were proposed to facilitate data interoperability; however, to the best of our knowledge, none of them has been implemented in the real world. Furthermore, there is no evidence of knowledge sharing among organizations using semantic network-based systems, and only a handful of the research papers included assessment methodologies and a discussion of the findings. The frameworks that provide medical services such as disease reasoning, decision making, and drug recommendations also lack reliability and validity. Most of the research articles suggested architectures but did not implement them, and their intended prototypes were never built.

4.2. Scope of Disease Research

Semantic Web technologies are being used in the healthcare sector to improve information interoperability and aid in identifying and treating diseases. Only a few of the 65 reviewed papers have examined frameworks for developing a fully functional system for diabetic healthcare or for a disease-specific collection of prebuilt queries. Earlier research also lacks the mapping of triplets from one illness RDF to other existing medical services, applications, and administrations, and research is lacking on intelligent user interfaces that grasp the semantics of clinical data. This paper shows that more study is required to use ontologies efficiently in the healthcare sector to preserve data with proper evaluation criteria.

4.3. Scope of Information Management Research

Medical data are valuable information used to help patients receive better care, yet implementing Semantic Web technologies to store and search such data is challenging. Various studies adopt specific methods that may aid the proper management of medical information; however, gaps remain. There has been no attempt to index high-quality videos and collect attributes for categorizing them, and a validation gap exists due to the lack of suitable evaluation techniques. In most studies, RDF ontologies are used to collect information from websites and represent those data, but no information is provided about how effective those models are in real-world applications.

4.4. Scope of Frontier Technology Research

Even though cutting-edge technologies such as AI, ML, robotics, and the IoT have revolutionized the healthcare industry and helped improve everything from routine tasks to data management and pharmaceutical development, the industry is still evolving and looking for ways to improve. From a research standpoint, the intersection of the Semantic Web and frontier technology is not new at all, yet the Semantic Web presents some limitations. Since the web began as a web of documents, converting each document into data is incredibly challenging. Various tools and approaches, such as natural language processing (NLP), may be used for this task, but it would take a long time, and only limited attempts have been made to integrate NLP with domain knowledge induction. Ontology and logic have always been, and will continue to be, essential elements of AI development; yet connecting ontology to AI is frequently a problem in and of itself. Furthermore, because ontology trees often have a large number of nodes, real-time execution is problematic, and earlier studies have apparently failed to solve this problem. There have been significant attempts to incorporate various aspects of IoT resources into ontology creation, such as connectivity, virtualization, mobility, energy, or life cycle [ 108 , 109 ]. Researchers have attempted to enhance the computerization of the health and medical industry by utilizing the Internet of Things (IoT) and Semantic Web technologies (SWTs), two key emerging technologies that play a significant role in overcoming the challenges of handling and presenting data searches in hospitals, clinics, and other medical establishments. Despite these significant efforts to combine the IoT spectrum with Semantic Web technologies, research gaps in medical data management persist.
For instance, since its introduction, the Medical Internet of Things (MIoT) has taken an active role in improving the health, safety, and care of billions of people. Rather than going to the hospital for help and support, patients' health-related parameters can now be monitored remotely, continuously, and in real time, then processed and transferred to medical data centers via cloud storage. Because of cloud platforms' security risks, choosing one is a major technological challenge for the healthcare industry, and some cloud-based storage systems cannot adequately preserve patients' semantic data and information [ 6 , 8 ]. However, none of the research articles suggested architectures, nor were any intended prototypes built, to address these cloud security issues of MIoT in general.

4.5. Scope of Regulatory Condition Research

Regulations are paramount for the healthcare and medical industries to function properly. They support the global healthcare market, ensure the delivery of healthcare services, and safeguard the rights and safety of patients, doctors, developers, researchers, and healthcare agents. Like many other technologies, the Semantic Web has its detractors in terms of legislation and regulation. Historically, scaling medical knowledge graphs has always been a challenge. Owing to privacy concerns and a lack of legal clarity, healthcare companies are not sufficiently incentivized to share their data as linked data, and only a few academic papers and documents disclose how these corporations automate the process. Furthermore, compared to other types of datasets, many linked datasets representing tools are of poor quality, making them highly challenging to apply to real-world problems. Other alternatives, such as property graph databases like Neo4j and mixed models like OrientDB, have grown in popularity due to the RDF format's complexity, and healthcare application developers and designers prefer web APIs over SPARQL endpoints for sending data in JSON format. This study illustrates that more research is needed to improve the semantic quality of available technologies (e.g., RDF, OWL, and SPARQL) so that they can be used effectively in the healthcare industry to ease healthcare development.
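The preference for JSON web APIs over SPARQL endpoints noted above can be illustrated by grouping RDF-style triples into a plain JSON object that a typical web developer can consume directly. The subjects, predicates, and values below are hypothetical:

```python
# Sketch of why developers often favor JSON web APIs: the same RDF-style
# triples grouped per subject become a plain JSON object that is easy to
# consume without SPARQL. IRIs and values are hypothetical.
import json

triples = [
    ("ex:alice", "ex:hasCondition", "ex:diabetes"),
    ("ex:alice", "ex:hasGlucose", "7.2"),
    ("ex:bob", "ex:hasCondition", "ex:hypertension"),
]

def triples_to_json(triples):
    """Group (s, p, o) triples into {subject: {predicate: [objects]}}."""
    out = {}
    for s, p, o in triples:
        out.setdefault(s, {}).setdefault(p, []).append(o)
    return out

doc = triples_to_json(triples)
print(json.dumps(doc["ex:alice"], indent=2))
```

The trade-off is that this flattened view loses the global graph semantics (shared IRIs, inference, federation) that make linked data valuable in the first place, which is the tension the paragraph above describes.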

5. Discussion

This section describes the findings from the selected studies based on the answers to the research questions, enabling readers to map each research question to the contributions of this systematic review.

5.1. (RQ1) What Is the Research Profile of Existing Literature on the Semantic Web in the Healthcare Context?

This research question aims to determine the primary objectives of using the Semantic Web and the major medical and healthcare sectors where Semantic Web technologies are adopted. As the Semantic Web has shown an incremental research trend in recent years, a structured bibliometric study is needed. This study collected data from the Scopus, IEEE Xplore Digital Library, ACM Digital Library, and Semantic Scholar databases, examining various aspects and their relationships. We performed a bibliometric analysis of essential details such as preliminary information, country, author, and application area for Semantic Web publications in the healthcare context, using the open-source application VOSviewer. The outcomes and specifics of the analysis are detailed in Section 2.

As stated in the methodology section, our study consists of 65 documents published across a range of prestigious conferences, journals, and book series. Of these 65 shortlisted papers, 27 were presented at conferences, 21 appeared in journals, and 17 came from book chapters. Our study observes that the field of “Semantic Web in Healthcare” is not comparatively new: the first shortlisted paper on this topic was published in 2001, and growth remained minimal until around 2007. Surprisingly, the maximum number of articles (8) published in this discipline was in 2013, but from 2013 to 2016 there was only a minor shift by researchers globally, most likely due to the introduction of Web 3.0 in 2014. It is yet to be seen how Web 3.0 will effectively leverage the Semantic Web as a core component rather than treating it as a competing technology in the medical healthcare field. The decrease in the number of articles shows how researchers' interests switched from the Semantic Web to the emerging Web 3.0; however, the Semantic Web remains the top choice of medical practitioners as Web 3.0 evolves. Furthermore, the United States is the country with the most research papers, followed by France and India (see Figure 4 ), which implies that both developed and emerging countries use the Semantic Web in their healthcare industries. VOSviewer also classified 35 works as published in Computer Science, 16 in Engineering, 9 in Medicine, and 5 in Mathematics. We likewise used VOSviewer to visually represent the keyword co-occurrences from the 65 shortlisted publications. The total number of keywords was 774; with the minimum number of occurrences per keyword set at 5, 76 keywords met our requirements.
Figure 7 shows our findings in a co-occurrence graph together with the other essential phrases. As expected, Semantic Web and healthcare are the most frequent keywords, each mentioned 55 times. They are followed by web services, decision support systems, interoperability, and so on. These terms are used to categorize the Semantic Web's application areas in healthcare.


Figure 7: Co-occurrence network of the indexed keywords.

Our analysis also reveals that most proposed frameworks for improving and expanding the healthcare system do so without the involvement of health professionals. Some of them discussed data interoperability, diseases, frontier technologies, and regulatory issues, while others emphasized the use of video ontologies and video conferencing to bridge communication gaps. The majority of the publications only propose frameworks with no implementation. Web services currently merely make services available, with no automatic mechanism to connect them in a meaningful way.

5.2. (RQ2) What Are the Primary Objectives of Using the Semantic Web, and What Are the Major Areas of Medical and Healthcare Where Semantic Web Technologies Are Adopted?

The adoption of the Semantic Web in healthcare strives to improve collaboration, research, development, and organizational innovation. The Semantic Web has two primary objectives: (1) facilitating semantic interoperability and (2) providing end-users with more intelligent support. Semantic interoperability is a key bottleneck in many healthcare applications and one of today's major problems. Semantic Web technologies can help with data integration, knowledge administration, information exchange, and semantic interoperability between healthcare information systems. The Semantic Web focuses on building a web of data suitable for machine processing with little to no human participation. Healthcare applications can therefore better assist in finding information, personalizing healthcare information, selecting information sources, and collaborating within and across organizational boundaries by inferring the consequences of data on the Internet.
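As a minimal illustration of why shared identifiers enable semantic interoperability, the following pure-Python sketch merges patient triples from two hypothetical systems. All URIs, predicate names, and values here are invented for illustration; real systems would use RDF tooling rather than plain tuples:

```python
# Each record is a (subject, predicate, object) triple. Using shared,
# globally unique identifiers (URIs) as subjects is what lets data from
# independent systems merge by simple set union, with no schema mapping.
registration = {
    ("http://example.org/patient/42", "rdf:type", "ex:Patient"),
    ("http://example.org/patient/42", "foaf:name", "Jane Doe"),
}
clinical = {
    ("http://example.org/patient/42", "ex:diagnosedWith", "ex:Type2Diabetes"),
}

# Integration is just set union because both sources agree on the URI.
merged = registration | clinical

# "Query": collect everything known about one patient across both systems.
patient = "http://example.org/patient/42"
facts = {(p, o) for s, p, o in merged if s == patient}
for p, o in sorted(facts):
    print(p, o)
```

The key design point is that neither system needed to know about the other's schema; agreement on the subject URI alone links the records.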

Based on our review of the findings, we found five application domains where the Semantic Web is being adopted in the healthcare context. This study briefly describes those domains in Sections 5.2.1 to 5.2.5 and relates each of them to healthcare.

5.2.1. E-Healthcare Service

More than two-fifths of the 65 studies considered here concern e-healthcare services (see Table 2). These studies focus on ways to use the Internet and related technologies to offer and promote health services and information, as well as diagnosis recommendation systems and online healthcare service automation.

In one such study, researchers developed a web-based prototype that generates the required reports with a high degree of data integration, using a rule-based production technique to establish links between prevalent diseases and the range of diseases associated with a specific gene [ 64 ].

Another group of e-healthcare service studies focused on how current electronic information and communication technology could support people's health and healthcare [ 46 , 49 , 50 , 61 – 64 , 97 ]. Most of the authors used a WSMO (Web Service Modeling Ontology) service delivery platform with automatic alignment of user-defined EHR (electronic health record) workflows, where service owners register a service and the system automates its prefiltering, discovery, composition, ranking, and invocation to provide healthcare.

The adoption of e-healthcare in developing countries has proven to be a feasible and effective way to improve healthcare. It allows easy access to health records and information and reduces paperwork, duplicate charges, and other healthcare costs. If the proper implementation of e-healthcare technologies is ensured, everyone will benefit.

5.2.2. Diseases

Out of the 65 articles, only 8 concern the adoption of the Semantic Web in the diseases sector (see Table 2). These articles discuss the deployment of disease-specific healthcare platforms, disease information exchange systems, knowledge base generation, and research portals for specialized diseases.

One study developed a web-based prototype for an Integrated Mobile Information System (IMIS) for diabetic patient care [ 20 ]. The authors used ontology mapping so that related organizations could access each other's information. They also embedded feedback and communication mechanisms within the system to include user feedback.

Another study developed queryMed packages for pharmaco-epidemiologists to access and link medical and pharmacological knowledge to electronic health records [ 10 ]. The authors identified all the medications approved for critical limb ischemia (CLI) and detected one contraindicated prescription for one patient.

Disease management/prediction systems are necessary for finding the hidden knowledge within a group of disease data and can be used to analyze and predict the future behavior of diseases. An all-in-one strategy rarely works in the healthcare industry. It is critical to develop a personalized and contextualized disease prediction system to enhance user experience.

5.2.3. Information Management

Almost two-fifths of the 65 studies considered here are about information management (see Table 2); after e-healthcare services, this category has the most studies. These articles concern healthcare management systems, medical information indexing, healthcare interoperability systems, and the decision making, coordination, control, analysis, and visualization of healthcare information.

One study presented a medical knowledge morphing system that focuses on ontology-based knowledge articulation and the morphing of heterogeneous information using logic and ontology mediation [ 105 ]. The authors used a high-level domain ontology to describe fundamental medical concepts and a low-level artifact ontology to capture content and structure.

In another study, an Annotation and Image Markup (AIM) ontology was developed to attach important semantic information to images, allowing radiological images to be mined for image patterns that predict the structures' biological features. The authors transformed XML data into OWL and DICOM-SR to control the ontological terminology used in image annotation.

A well-designed healthcare information system is required for management, evaluation, observation, and the overall quality assurance and improvement of the health system's key stakeholders. Even though a significant amount of work has been done in this sector, it is far from sufficient, and it deserves continued focus.

5.2.4. Frontier Technology

We found only 3 publications on frontier technology (see Table 2 ). These articles describe healthcare application domains that use AI, machine learning, or computer vision to automate medical coding, generate medical informatics, and deal with intelligent IoT data and services.

The first article describes a method for preprocessing raw data with cluster-based missing value imputation (CMVI), with the goal of improving the imputed data quality of a diabetes ontology graph [ 27 ]. Their findings show that preprocessed data achieve better imputation accuracy than raw, unprocessed data, as measured by the coefficient of determination (R2), the index of agreement (D2), and the root mean square error (RMSE).
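The three metrics named above can be computed directly. The following Python sketch uses invented sample values, and it implements the index of agreement as Willmott's d, which is an assumption about the variant the authors used:

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and imputed values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def index_of_agreement(obs, pred):
    """Willmott's index of agreement, bounded in [0, 1]; 1 = perfect match."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean_o) + abs(o - mean_o)) ** 2 for o, p in zip(obs, pred))
    return 1 - num / den

# Hypothetical observed values and their imputed counterparts
observed = [5.1, 6.0, 7.2, 8.1]
imputed = [5.0, 6.2, 7.0, 8.3]
print(rmse(observed, imputed), r_squared(observed, imputed),
      index_of_agreement(observed, imputed))
```

Lower RMSE and values of R2 and d closer to 1 indicate better imputation quality, which is the direction of improvement the study reports for preprocessed data.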

Another article discusses how images as ontologies can be used in health informatics and how deep learning models built on ontologies can support computer vision [ 100 ].

Frontier technologies such as AI, ML, and IoT offer many advantages over traditional analytics and clinical decision-making methodologies. At a granular level, these technologies provide

  • Increased efficiency.
  • Better treatment alternatives.
  • Faster diagnosis.
  • Faster drug discovery.
  • Better disease outbreak prediction.
  • Medical consultations with patients with little or no participation of healthcare providers.

There is a lack of research on the integration of frontier technologies with the Semantic Web. Researchers should focus their efforts on this area. Students must take the initiative to develop creative technological inventions.

5.2.5. Regulatory Conditions

There were only 3 publications that used Semantic Web technology to address regulatory conditions (see Table 2). These studies focus on the challenges and requirements of the Semantic Web, the technologies that represent it, awareness, and policy and regulation.

One article describes how to design, operate, and extend a Semantic Web-based ontology for a pathology information system [ 103 ]. Its authors highlight the technologies, regulations, and best practices that should be followed throughout the creation of a lung pathology knowledge base.

Another study talks about the challenges of integrating healthcare web service composition with domain ontology to implement diverse business solutions to accomplish complex business logic [ 104 ].

Privacy and regulation are important for establishing a clear framework within which healthcare providers, patients, healthcare agents, and healthcare application developers can learn and maintain the skills needed to provide safe, productive, patient-centered health services. From these regulatory-condition articles, we can understand whether the technology is easy to use, what challenges it faces, and whether it is emerging, secure, and valuable to the healthcare community. More work is needed in this area.

5.3. (RQ3) Which Semantic Web Technologies Are Used in the Literature, and What Are the Familiar Technologies Considered by Each Solution?

This section discusses the various Semantic Web technologies used in the literature, as well as the most common ones among them. Numerous Semantic Web technologies are available to make applications more advanced, and the healthcare industry makes extensive use of them. The most prevalent Semantic Web technologies used in the healthcare sector are Resource Description Framework (RDF), Web Ontology Language (OWL), SPARQL Protocol and RDF Query Language (SPARQL), Semantic Web Rule Language (SWRL), Web Service Modeling Ontology (WSMO), Notation3 (N3), SPARQL Inferencing Notation (SPIN), Euler Yap Engine (EYE), Web Service Modeling Language (WSML), and RDF Data Query Language (RDQL).

Various Semantic Web technologies are used to accomplish different goals, such as converting relational databases to RDF/OWL-based databases, data linking, reasoning, data sharing, and data representation. Ontologies are considered the basis of the Semantic Web, and all data on the Semantic Web are grounded in them. To take advantage of ontology-based data, it must first be transformed into RDF-based datasets. RDF is a standard model for data interchange on the web; its qualities make data merging easier and allow schemas to evolve over time without updating all of the data [ 52 ]. The majority of the researchers utilized RDF to represent and interchange linked data. Notation3 is used in the Semantic Web as an alternative RDF syntax: it was created to serialize RDF models, supports RDF-based principles and constraints, and is easier for humans to read than RDF/XML notation. In addition to RDF, OWL is employed in the research articles to express ontology-based data. OWL is a semantic markup language for exchanging and distributing ontologies on the web [ 52 ]. A second version, OWL2, adds improved descriptive ability for attributes, enhanced compatibility for object types, simplified metamodeling abilities, and enhanced annotation functionality. Numerous OWL-based ontologies are available on the web; OWL-S, a Semantic Web ontology for services, is one of them [ 78 ]. OWL is also used for semantic reasoning: combining Description Logic with OWL (OWL-DL) takes reasoning capability to another level, providing desirable algorithmic features for reasoning engines and supporting the existing Description Logic industry [ 82 ]. As an alternative reasoner, EYE is an advanced chaining engine with Euler path detection [ 85 ] that uses backward and forward reasoning to arrive at more accurate conclusions and results. To query RDF and OWL-based datasets, the scholars made use of SPARQL, the predominant query language for such databases. However, one study employed RDQL as a query language for RDF datasets [ 20 ]; it can query only RDF datasets. Several papers required semantic rules and constraints, so they used SWRL, a language for writing semantic rules based on OWL principles. Alongside SWRL, scholars used SPIN, a rule language for the Semantic Web that is based on SPARQL [ 60 ]. Specifying web services for different purposes is also essential in the Semantic Web. In this regard, some research papers discussed leveraging WSMO, a Semantic Web framework for characterizing and specifying web services semantically, and WSML, a syntactic and semantic language framework for describing the elements in WSMO [ 48 ]. Tables 4–8 summarize the Semantic Web technologies employed in the different thematic research areas. Section 3 provides detailed discussion.

Summary of Semantic Web technologies used in e-healthcare services.

References | RQ3 | RQ4
[ ] | SPARQL | ×
[ ] | × | ×
[ ] | RDF, OWL2, SPARQL, WSMO | ×
[ ] | OWL, SPARQL | ×
[ ] | OWL | ×
[ ] | RDF, SPARQL | ×
[ ] | OWL, WSML | ×
[ ] | RDF, OWL | ×
[ ] | OWL | ×
[ ] | RDF, OWL, SPARQL | ×
[ ] | RDF, SPARQL | ×
[ ] | WSMO | Flora-2 Expression
[ ] | RDF, OWL, SPIN | ×
[ ] | RDF, OWL, SPARQL, SWRL, Notation3 | ×
[ ] | RDF, OWL, SWRL | ×
[ ] | RDF, OWL, Notation3 | ×
[ ] | RDF, OWL, SPARQL, SWRL | OSHCO Validation
[ ] | OWL, SWRL | ×
[ ] | OWL, SPARQL, SWRL | Histopathology Method
[ ] | RDF, OWL, SPARQL, SWRL, SPIN | ×
[ ] | RDF, OWL, SPARQL, SWRL | WS Composition System
[ ] | × | ×
[ ] | × | ×
[ ] | SPARQL | ×
[ ] | RDF | ×
[ ] | RDF, OWL | ×
[ ] | RDF, OWL, SPARQL | ×

Summary of Semantic Web technologies used in diseases.

References | RQ3 | RQ4
[ ] | RDF, OWL, SPARQL | ×
[ ] | RDF, OWL, RDQL | ×
[ ] | RDF, OWL, SPARQL | ×
[ ] | RDF, OWL, SWRL | ×
[ ] | × | ×
[ ] | RDF, OWL, SPARQL | ×
[ ] | × | ×
[ ] | RDF, OWL, OWL-S | ×

Summary of Semantic Web technologies used in information management.

References | RQ3 | RQ4
[ ] | × | ×
[ ] | RDF, OWL | ×
[ ] | RDF, OWL, SPARQL | BioMedLib (Deployment Model)
[ ] | RDF, OWL | ×
[ ] | OWL, SWRL | ×
[ ] | RDF, SPARQL | ×
[ ] | × | ×
[ ] | OWL, OWL-DL | ×
[ ] | RDF, SPARQL | D2RQ Framework
[ ] | RDF, SPARQL | ×
[ ] | RDF, Notation3, EYE | ×
[ ] | RDF, OWL, SPARQL, SWRL | ×
[ ] | RDF, OWL | ×
[ ] | × | ×
[ ] | × | ×
[ ] | OWL-S, WSMO | ×
[ ] | RDF, OWL-S, WSMO | ×
[ ] | × | ×
[ ] | OWL, OWL2, SPARQL | ×
[ ] | RDF, OWL, SPARQL | ×
[ ] | × | ×
[ ] | RDF, OWL, SPARQL | ×
[ ] | RDF, SPARQL | ×
[ ] | × | Stovanojic's Ontology Evolution and Management Process

Table 6 summarizes the Semantic Web technologies used in e-healthcare services. In this thematic research area, RDF, OWL, and SPARQL are the most commonly utilized technologies. Researchers employed RDF and OWL to construct RDF-based datasets, represent them, and develop links between data. As an alternative RDF syntax, one article used Notation3 to construct notations that are easier to read. In one paper, the scholars used OWL2, the second version of OWL, to utilize its latest features. Across the articles, SPARQL is the only query language utilized to query the datasets. Most of the articles used SWRL to construct rules and constraints for their systems; in addition, one article used SPIN to generate semantic rules and constraints, and SPIN appears in no other research area. Two articles used WSMO to identify the Semantic Web services their systems required. On the other hand, three articles in this theme did not use any Semantic Web technology.

Table 7 summarizes the Semantic Web technologies used in diseases. As in the preceding thematic research area, RDF, OWL, and SPARQL are the most frequently used technologies, and the motivations for using them are identical. However, one article utilized RDQL as an alternative to SPARQL to query RDF datasets. SWRL was again used to construct rules and constraints. It is also worth noting that one study built a model using OWL-S, an OWL-based service ontology. Finally, some studies in this field did not utilize any Semantic Web technology at all.

Table 8 summarizes the Semantic Web technologies used in information management. Nine distinct Semantic Web technologies are used in this thematic research area. As in the previous topic groups, RDF, OWL, and SPARQL are the most extensively used, and the technologies serve the same goals as before. This thematic area also mentions Notation3 for more readable RDF notations, OWL2 for its new capabilities, the OWL-S service ontology as a data source, and WSMO for identifying Semantic Web services. Two technologies appear here that are absent from the prior fields: OWL-DL, which combines OWL with Description Logic for reasoning over information, and the EYE reasoner, another reasoning engine. On the contrary, a significant proportion of the articles, six to be exact, did not employ any Semantic Web technologies.

Table 9 summarizes the Semantic Web technologies used in frontier technology. There are just three articles in this thematic field, and two of them did not employ any Semantic Web technology. The remaining paper uses RDF and SPARQL, which were very commonly used in the prior thematic research fields.

Table 10 summarizes the Semantic Web technologies used in regulatory conditions. Only two of the three articles in this research area include Semantic Web technologies: one uses RDF for semantic data representation and another uses OWL.

There are different applications of Semantic Web technologies in the articles, but most of the technologies are common in several articles. The most commonly used Semantic Web technologies are the SPARQL query language, RDF, OWL, and SWRL. Almost 80 percent of the analyzed papers used different functionalities of RDF. Furthermore, OWL and SPARQL technologies were used in nearly three-quarters of the articles. Besides, SWRL technology was applied in one-third of the analyzed studies. It is now obvious that these technologies have the potential to improve the healthcare industry.

5.4. (RQ4) What Are the Evaluating Procedures Used to Assess the Efficiency of Each Solution?

This category covers the suggested technologies and the procedures used to evaluate these works. In truth, assessing a designed healthcare system's quality, performance, and utility is a crucial responsibility, and because the healthcare industry is highly sensitive, suitable evaluation standards are necessary. Due to technological limitations, however, evaluation is not well organized or maintained, and because the notion of Semantic Web technology is still new in the medical field, overall development and evaluation remain inadequate.

In the e-healthcare service-based theme (see Table 6), the authors in [ 51 ] established a set of setups to test their matcher's scalability in terms of the number and complexity of Semantic Web services for medical appointments. They considered the logical complexity of the Flora-2 expressions used in pre- and post-conditions, which can handle various web service and goal descriptions, including ontology consistency checks. Other evaluation methods, such as OSHCO validation for automatic decision support in medical services, were introduced by the authors in [ 57 ]. An experiment assessed a system on web service datasets using two metrics, execution time and correctness, for graph-based Semantic Web services for healthcare data integration [ 62 ], and histopathology methods were used to evaluate the performance of semantic mappings [ 58 ].

However, only two publications from the large body of information-management-related work presented evaluation procedures (see Table 8). Tonguo et al. [ 25 ] used BioMedLib to evaluate a system that takes a user's search query and retrieves articles from millions of records in national biomedical article databases. Another used the D2RQ framework for default semantic mapping generation [ 83 ].

In terms of frontier technology (see Table 9), the cluster-based missing value imputation (CMVI) algorithm was used to extract knowledge in the Semantic Web's healthcare domain [ 101 ]. The imputation accuracy was measured using two well-known performance metrics, namely, the coefficient of determination (R2) and the index of agreement (D2), along with the root mean square error (RMSE) test. In addition, various open-domain question-answer evaluation campaigns such as TREC21, CLEF22, NTCIR23, and Quaero24 have been launched to evaluate a Semantic Web and NLP-based medical questionnaire system [ 27 ].

Summary of Semantic Web technologies used in frontier technology.

References | RQ3 | RQ4
[ ] | RDF, SPARQL | QA Evaluation (TREC, CLEF, NTCIR, Quaero)
[ ] | × | ×
[ ] | × | Root mean square error (RMSE)

None of the authors provide any evaluation methodologies connected to diseases or regulatory conditions (see Tables 7 and 10). Well-designed evaluation criteria are required to assess the consequences of Semantic Web applications for specific diseases. As those studies focus on the obstacles and problems of the Semantic Web in healthcare services, evaluation is likewise missing for regulatory conditions.

Summary of Semantic Web technologies used in regulatory conditions.

References | RQ3 | RQ4
[ ] | RDF | ×
[ ] | × | ×
[ ] | OWL | ×

5.5. (RQ5) What Are the Research Gaps and Limitations of the Prior Literature, and What Future Research Avenues Can Be Derived to Advance Web 3.0 or Semantic Web Technology in Medical and Healthcare?

The healthcare industry is on the verge of a real Internet revolution. It intends to usher in a new era of web interaction through the adoption of the Semantic Web, with significant changes in how developers and content creators use it. This web will make healthcare web services, applications, and healthcare agents more intelligent and even provide care with human-like intelligence by utilizing AI systems. Despite the tremendous amount of innovation it may bring, its adoption in healthcare faces considerable challenges.

The problem with the “Semantic Web” is that it requires a level of implementation commitment from web developers and content creators that may not be forthcoming. First, a large portion of existing healthcare web content does not use semantic markup and will never do so due to a lack of resources to rewrite the HTML code. Second, there is no guarantee that new healthcare content will utilize semantic markup, because it requires additional effort. However, it is essential to guide the Semantic Web developer community in the right direction so that it can contribute to future medical and healthcare development. The following are the primary obstacles the Semantic Web faces in general: (i) content availability, (ii) expanding ontologies, (iii) scalability, (iv) multilingualism, (v) visualization to decrease information overload, and (vi) Semantic Web language stability.

Furthermore, based on our thorough examination of the 65 publications, the following are some of the most severe technological obstacles that the Semantic Web faces in the healthcare context and must overcome; future research may be able to alleviate some of these challenges:

  • Integrated Data Issue . The vulnerability of interconnected data is one of the most significant challenges with Semantic Web adoption. All of a patient's health records and personal information are stored and interlinked at an endpoint, and a malicious party may gain control of one's life if the record is compromised.
  • Vastness . The current Internet contains a vast amount of healthcare records that are not yet semantically indexed; any reasoning system that wants to analyze all of these data and figure out how they function must handle massive volumes of data.
  • Vagueness . As the Semantic Web is not yet mature, applications cannot handle nonspecific user queries adequately.
  • Accessibility . The Semantic Web may not work on older or low-end devices; only highly configured devices will be able to manage such web content.
  • Usability . Because SPARQL queries are often embedded in websites and services, the technology will be difficult for beginners to comprehend.
  • Deceit . What if the information provided by a source is false and deceptive? Management and regulation become crucial.

The study also identifies future research opportunities and gives research recommendations to the developer and researcher communities for each of the identified theme areas where the Semantic Web is being used in medical and healthcare (see Section 4). Tables 4 and 5 summarize the research gaps and probable future research directions.

6. Conclusion

The purpose of this SLR is to discover the most recent advances in SW technology in the medical and healthcare fields. We used well-established research techniques to find relevant studies in prestigious databases such as Scopus, IEEE Xplore Digital Library, ACM Digital Library, and Semantic Scholar. Consequently, we were able to answer five significant RQs. We answered RQ1 by giving a bibliometric analysis-based research profile of the existing literature. The study profile includes information on annual trends, publishing sources, methodological approaches, geographic coverage, and theories applied (see Sections 2.4 and 5.1). We performed content analysis to determine the answers to RQ2, RQ3, and RQ4; we also identified research themes, with a focus on technical challenges in healthcare where SW technologies can be used (see Sections 3 and 5.2–5.4). Finally, the synthesis of prior literature helped us to identify research gaps in the existing literature and suggest areas for future research in RQ5 (see Section 5.5 and Tables 4 and 5). The findings of this study have important implications for healthcare practitioners and scholars who are interested in the Semantic Web and how it might be used in medical and healthcare contexts.

The global digital healthcare market is growing to meet the health needs of society, individuals, and the environment. As a result, a substantial study is required to assist governments and organizations in overcoming technological challenges. We successfully reviewed 65 academic papers comprising journal articles, conference papers, and book chapters from prestigious databases. We have identified five thematic areas based on our research questions to discuss the objectives, solutions, and prior work of Semantic Web technology in the healthcare field. Among these, we observed that e-healthcare services and medical information management are the most discussed topics [ 105 , 107 ]. According to our findings, with the emergence of Semantic Web technology, integration, discovery, and exploration of medical data from disparate sources have become more accessible. Accordingly, medical applications are incorporating semantic technology to establish a unified healthcare system to facilitate the retrieval of information and link data from multiple sources. Most of the studies that we examined discussed the importance of knowledge sharing among clinicians and patients to develop an effective medical service. The frameworks described depended on the proper data distribution from various sources supported by specific technology interventions [ 24 ]. To answer patient queries, SW-based systems such as appointment matchmaking, quality assurance, and NLP-based chatbots have been proposed to improve healthcare services [ 24 , 111 , 112 ]. In short, the Semantic Web has huge potential and is widely regarded as the web's future, Web 3.0, which will present a new challenge and opportunity in combining healthcare big data with the web to make it more intelligent [ 6 , 113 ].

The analysis of the proposed solutions discussed in the papers helped us to identify the main challenges in healthcare systems. Besides that, this study also identifies future challenges and research opportunities for future medical researchers. We observed that most of the proposed solutions are yet to be implemented and many problems are only rudimentarily tackled so far. In conclusion, by exchanging knowledge among physicians, researchers, and healthcare professionals, the SW encourages improvement from the “syntactic” to “semantic” and finally to the “pragmatic” level of services, applications, and people. From the overall observation of the findings of this SLR, a future strategy will be to adopt some of the suggested solutions to overcome the shortcomings and open a new door for the medical industry. In the future, we will try to implement such solutions and eliminate the problems.

Data Availability

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Navigation Menu

Search code, repositories, users, issues, pull requests..., provide feedback.

We read every piece of feedback, and take your input very seriously.

Saved searches

Use saved searches to filter your results more quickly.

To see all available qualifiers, see our documentation .

Semantic Web

The Semantic Web is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C) . The goal of the Semantic Web is to make Internet data machine-readable.

This can be seen in various aspects of web development, one being semantic HTML as a way to give your markup meaning, microformats like schema.org or linked-data like json-ld . Another aspect is from the opposite perspective: Reading and interpreting data. This can be done with metadata via RDF .

Here are 1,256 public repositories matching this topic...

Rdflib / rdflib.

RDFLib is a Python library for working with RDF, a simple yet powerful language for representing information.

  • Updated Jun 11, 2024

digitalbazaar / jsonld.js

A JSON-LD Processor and API implementation in JavaScript

  • Updated May 31, 2024

semantalytics / awesome-semantic-web

A curated list of various semantic web and linked data resources.

  • Updated Jun 3, 2024

lowlighter / matcha

🍵 Drop-in semantic styling library in pure CSS. Highly customizable and perfect for simple websites and prototyping web apps!

  • Updated Jun 10, 2024

brettkromkamp / contextualise

Contextualise is an effective tool particularly suited for organising information-heavy projects and activities consisting of unstructured and widely diverse data and information resources

  • Updated Jun 8, 2024

google / schema-dts

JSON-LD TypeScript types for Schema.org vocabulary

  • Updated Mar 7, 2024

scrapinghub / extruct

Extract embedded metadata from HTML markup

  • Updated May 29, 2024

dbpedia-spotlight / dbpedia-spotlight

DBpedia Spotlight is a tool for automatically annotating mentions of DBpedia resources in text.

  • Updated Mar 8, 2018

nicolas-hbt / pygraft

Configurable Generation of Synthetic Schemas and Knowledge Graphs at Your Fingertips

  • Updated Jan 24, 2024

digitalbazaar / pyld

JSON-LD processor written in Python

  • Updated May 10, 2024

brettkromkamp / awesome-knowledge-management

A curated list of amazingly awesome articles, people, applications, software libraries and projects related to the knowledge management space

  • Updated Apr 12, 2024

SemanticMediaWiki / SemanticMediaWiki

🔗 Semantic MediaWiki turns MediaWiki into a knowledge management platform with query and export capabilities

  • Updated Jun 6, 2024

pysemtec / semantic-python-overview

(subjective) overview of projects which are related both to python and semantic technologies (RDF, OWL, Reasoning, ...)

  • Updated Oct 30, 2023

AtomGraph / LinkedDataHub

The low-code Knowledge Graph application platform. Apache license.

google / react-schemaorg

Type-checked Schema.org JSON-LD for React

  • Updated Apr 26, 2024

ruby-rdf / rdf

RDF.rb is a pure-Ruby library for working with Resource Description Framework (RDF) data.

  • Updated Jun 7, 2024

eclipse-rdf4j / rdf4j

Eclipse RDF4J: scalable RDF for Java

  • Updated Jun 9, 2024

davidesantangelo / api.rss

RSS as RESTful. This service allows you to transform RSS feed into an awesome API.

  • Updated May 20, 2021

lanthaler / JsonLD

JSON-LD processor for PHP

  • Updated Oct 2, 2023

linkml / linkml

Linked Open Data Modeling Language

Created by Tim Berners-Lee, James Alexander Hendler, Ora Lassila

Semantic Technologies: From Niche to the Mainstream of Web 3? A Comprehensive Framework for Web Information Modelling and Semantic Annotation

  • Fefie Dotsika
  • Published 2012
  • Computer Science

Linguistics, The University of Chicago

Topics in Semantics and Pragmatics

This course will provide a comprehensive overview of the empirical patterns, analytical challenges and broader theoretical issues surrounding a particular topic, such as information structure, presupposition, scalar implicature, binding, aspectual composition, nominal reference, and so forth.

Semantic Kernel

The latest news from the Semantic Kernel team for developers

Build AI Applications with ease using Semantic Kernel and .NET Aspire

New LinkedIn Learning Course on Semantic Kernel Fundamentals

Build 2024 Recap: Bridging the chasm between your ML and app devs

Announcing the Release of Semantic Kernel Python 1.0.0

Announcing the General Availability of Semantic Kernel for Java

Semantic Kernel Time Plugin with Python

Use Semantic Kernel to create a restaurant bookings sample with Python

Connect Logic Apps' 1,400 connectors to Semantic Kernel

Meet the Semantic Kernel Team at Microsoft BUILD

Customer Case Study: Fujitsu Composite AI and Semantic Kernel

IMAGES

  1. Ontology on the Semantic Web

  2. PPT

  3. Ontology on the Semantic Web

  4. Ontology on the Semantic Web

  5. New thesis: Terminology and the Semantic Web

  6. Introduction To The Semantic Web

VIDEO

  1. Why the Semantic Web will never work

  2. Architecture Thesis Topics: Sustainability #architecture #thesis #thesisproject #design #school

  3. Introduction to Semantic MediaWiki

  4. Libraries and the Semantic Web

  5. Unconventional Thesis Topics for Graduate Architecture Students! #architecture

  6. Top 12 Thesis Topics in Education

COMMENTS

  1. Semantic Web Research Topics for MS PhD

    Semantic Web Research Topic ideas for MS, or Ph.D. Degree. I am sharing with you some of the research topics regarding Semantic Web that you can choose for your research proposal for the thesis work of MS, or Ph.D. Degree. Representing construction-related geometry in a semantic web context: A review of approaches.

  2. semantic web Latest Research Papers

    This paper links two important current concerns, the security of information and enforced online education due to COVID-19, with the Semantic Web. The steganography requirement for the Semantic Web is discussed elaborately; even though encryption is applied, it is inadequate in providing protection. Web 2.0 issues concerning online education and ...

  3. PDF Semantic Knowledge Graph Creation From Structured Data

    topic within the internet industry, both in public and private domains. Central within this technology is the concept of the Semantic Web, with the vision of providing semantics to the internet through standards set by the World Wide Web Consortium. But even though the Semantic Web vision was introduced in the early 2000s, the concept seems to ...

  4. PDF Semantic Web Topic Models: Integrating Ontological Knowledge and ...

    Semantic Web Topic Models: Integrating Ontological Knowledge and Probabilistic Topic Models. by Mehdi Allahyari B.S., University of Kashan, 2005 A Dissertation Submitted to the Graduate Faculty of The University of Georgia in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy Athens, Georgia 2016.

  5. Examples of Semantic Web Applications

    Microsoft, Google, and Yahoo use Schema.org, which has an RDFa representation. The Ecommerce Industry has GoodRelations, which also uses RDFa. These frameworks are all now actively being used to bring users a better web experience. An excellent and specific case study on this usage of Semantic Web technologies is Best Buy.

  6. Semantic Web and Linked Data: Journals, Articles and Papers

    The Semantic Web provides for the ability to semantically link relationships between Web resources, real world resources, and concepts through the use of Linked Data enabled by Resource Description Framework (RDF). RDF uses a simple subject-predicate-object statement known as a triple for its basic building block.
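The subject-predicate-object model described above can be sketched as a toy in-memory triple store in plain Python (the ex: IRIs are invented examples, and a real system would use an RDF library such as rdflib instead):

```python
# Each fact is a (subject, predicate, object) tuple, mirroring RDF's
# basic building block, the triple.
triples = {
    ("ex:TimBL", "ex:invented", "ex:WorldWideWeb"),
    ("ex:TimBL", "ex:proposed", "ex:SemanticWeb"),
    ("ex:RDF", "ex:partOf", "ex:SemanticWeb"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None is a wildcard."""
    return {
        (ts, tp, to)
        for (ts, tp, to) in triples
        if s in (None, ts) and p in (None, tp) and o in (None, to)
    }

# Everything that points at the Semantic Web, whatever the subject:
for t in sorted(match(o="ex:SemanticWeb")):
    print(t)
```

Passing None for a position acts as a wildcard, loosely analogous to a variable in a SPARQL basic graph pattern.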

  7. Introduction to the Semantic Web Technologies

    1 Introduction. The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation [ 6 ]. For newcomers to the Semantic Web, the above definition taken from the article, which is often taken as the starting point for the ...

  8. Semantic Web Thesis Topics

    Semantic Web Thesis Topics - Free download as PDF File (.pdf), Text File (.txt) or read online for free. Scribd is the world's largest social reading and publishing site.

  9. Introduction to the Semantic Web

    Introduction to the Semantic Web. The Semantic Web, Web 3.0, the Linked Data Web, the Web of Data…whatever you call it…represents the next major evolution in connecting and representing information. It enables data to be linked from a source to any other source and to be understood by computers so that they can perform increasingly ...

  10. PDF Learning Applications based on Semantic Web Technologies

    Semantic Web technology. The thesis presents seven recommendations in terms of architectures, technologies, frameworks, and type of application to focus on. ... Realizing that interest in a topic does not necessarily follow from facts or logic gave me both an interest and a hint of the

  11. PDF An Evaluation Platform for Semantic Web Technology

    Dissertation No. 1061 An Evaluation Platform for Semantic Web Technology by Cécile Åberg Department of Computer and Information Science Linköpings universitet SE-581 83 Linköping, Sweden Linköping 2007 . ISSN 0345-7524 ISBN 91-85643-31-9 Printed by LiU-Tryck in Linköping, Sweden

  12. PDF An Introduction to the Semantic Web and the Web of Data

    Interested for years in sharing data on the Web? RDF/S is the only reasonable solution! International Semantic Web Conference (ISWC): co-organizer in 2007, 2009, 2010; Vice-Chair in 2011 (Bonn); PC-Chair in 2012 (Boston). Hope to see you there! Research topics: decentralized data integration, RDF storage (P2P/Cloud), semantic DNS.

  13. Semantic Web in Healthcare: A Systematic Literature Review of

    2. Methodology. A systematic review is a research study that looks at many publications to answer a specific research question. This study follows such a review to examine previous research studies that include identifying, analyzing, and interpreting all accessible information relevant to the recent progress of pertinent literature on Web 3.0 or Semantic Web in medical and healthcare or our ...

  14. semantic-web · GitHub Topics · GitHub

    Semantic Web. The Semantic Web is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable. This can be seen in various aspects of web development, one being semantic HTML as a way to give your markup meaning, microformats like schema ...

  15. Dissertations / Theses on the topic 'Semantic web'

    List of dissertations / theses on the topic 'Semantic web'. Scholarly publications with full text pdf download. Related research topic ideas.

  16. PDF Bachelor Thesis Semantic web

    achieve that, both the data and its model must be presented in a format of the semantic web. The aim of this thesis is to compare expressive power of UML, Java programming language and OWL2, which is the main language for technologies based on the idea of the semantic web. Another goal is to provide unified insight into common use-cases,

  17. Semantic Web Technologies for the Internet of Things: Systematic

    Semantic Web and blockchain technologies are the fundamental building blocks of Web3 (the third version of the Internet), which aims to link data through a decentralized approach. Blockchain provides a decentralized and secure framework for users to safeguard their data and take control over their data and Web3 experiences.

  18. Semantic Search Engine Optimization in the News Media Industry

    In the shift toward what some have called Web 3.0 (Rudman & Bruwer, 2016), semantic search focuses on the meaning behind search queries instead of individual keywords, aiming at a better understanding of natural language and search intent (a human-like, semantic approach) (Pecánek, 2020b).

  19. Semantic Web Technologies for the Internet of Things: Systematic

    The major purpose of this platform is to resolve three main problems in distributed IoT domains by applying semantic technologies to IoT, such as semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. This platform is based on the IoT-based service integration ontology.

  20. Dissertations / Theses: 'Library, semantic web'

    List of dissertations / theses on the topic 'Library, semantic web'. Scholarly publications with full text pdf download. Related research topic ideas.

  21. Dissertations / Theses: 'Informatics ; semantic web'

    List of dissertations / theses on the topic 'Informatics ; semantic web'. Scholarly publications with full text pdf download. Related research topic ideas.

  22. Semantic Technologies: From Niche to The Mainstream of Web 3? a

    The proposed framework assists web information modelling, facilitates semantic annotation and information retrieval, enables system interoperability and enhances information quality, which is the thesis' original contribution to knowledge. Context: Web information technologies developed and applied in the last decade have considerably changed the way web applications operate and have ...

  23. Topics in Semantics and Pragmatics

    Topics in Semantics and Pragmatics. This course will provide a comprehensive overview of the empirical patterns, analytical challenges and broader theoretical issues surrounding a particular topic, such as information structure, presupposition, scalar implicature, binding, aspectual composition, nominal reference, and so forth.

  24. Semantic Kernel

    Evan Mattson, Eduard van Valkenburg. We are thrilled to announce the release of the long-awaited Semantic Kernel Python SDK 1.0.0! This milestone version brings a plethora of enhancements and new features designed to empower AI and app developers with more robust and versatile tools. Exciting New Feature: Shared Prompts Across Languages One of ...