The church districts of the #Nordkirche on a #Karte (map), based on the data in @wikidata
Here is the #SPARQL query for it: https://w.wiki/Dm8f
At the Academy of Sciences and Literature in #Mainz, on #girlsday and in cooperation with #NFDI4Culture, work was done on a web application called "In meiner Nähe/Around me".
Wikidata entries for museums, schools and parks were queried for #Georeferenzierung (georeferencing) around a map marker. Along the way, the girls got an insight into #wikidata, #SPARQL queries, #html, #CSS and #javascript. And, above all, they had a lot of fun!
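A query along these lines can be sketched with the Wikidata Query Service's `wikibase:around` service; the centre coordinates, radius and the museum class below are illustrative placeholders, not necessarily the values used in the workshop:

```sparql
#defaultView:Map
SELECT ?place ?placeLabel ?location WHERE {
  # Find items with coordinates (P625) within 5 km of a point (here: roughly Mainz)
  SERVICE wikibase:around {
    ?place wdt:P625 ?location .
    bd:serviceParam wikibase:center "Point(8.27 50.0)"^^geo:wktLiteral .
    bd:serviceParam wikibase:radius "5" .
  }
  # Restrict to museums (Q33506) and their subclasses
  ?place wdt:P31/wdt:P279* wd:Q33506 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "de,en" . }
}
LIMIT 100
```

With `#defaultView:Map`, the query service renders the results directly as map markers, which is what the web app builds on.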
^gp (sp)
The Géographie-cités laboratory is offering a day of presentations and training on #Wikipédia and #wikidata, with a hands-on component in #Python, #R and #SPARQL.
> 17 June in Paris and online
Registration is free but required:
https://site-fef6fe.gitpages.huma-num.fr/journee/wikipedia.html
New blog post! I've started taking regular snapshots of Library of Congress linked open data (as part of a much broader data rescue and monitoring project) and took the opportunity to finally get to grips with #SPARQL, #RDF and all that comes along with it. I've started small though...
On fooling around with triples
https://erambler.co.uk/blog/on-fooling-around-with-triples/
This week’s edition of my @linkedin newsletter shines a light on the impact of the Model Context Protocol (MCP) across multiple computing eras—leading right up to today.
We’ve arrived at a moment where loose coupling principles are cool (and super useful) again.
Naturally, there are some live demo nuggets in the post to help bring it all to life:
https://www.linkedin.com/pulse/model-context-protocol-bridging-generational-computing-idehen-qdtce/
The Model Context Protocol acts as a universal translator between LLMs and data sources, eliminating complex platform-specific requirements. Our new open source MCP server for ODBC (mcp-odbc-server) enables seamless integration of any ODBC-accessible data into RAG pipelines.
Read more in my latest newsletter!
https://www.linkedin.com/pulse/whats-model-context-protocol-why-important-kingsley-uyi-idehen-zy8ue
TIL: Sorting a #SPARQL query in #Wikidata by ascending #QID takes a bit more than what is described in https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples. Instead of just "ORDER BY ?horse", you have to resort to "ORDER BY (xsd:long(STRAFTER(STR(?horse),"Q")))".
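In context, a sketch of the horses example with numeric QID ordering might look like this (the class and limit are illustrative):

```sparql
SELECT ?horse ?horseLabel WHERE {
  ?horse wdt:P31/wdt:P279* wd:Q726 .   # instance of (a subclass of) horse
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
# Plain ORDER BY ?horse sorts the IRIs as strings (Q10 before Q2);
# stripping the leading "Q" and casting to a number sorts Q2, Q10, Q100 ...
ORDER BY (xsd:long(STRAFTER(STR(?horse), "Q")))
LIMIT 50
```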
We’re excited to introduce the latest Virtuoso Maintenance Release—Virtuoso 08.03.3333—packed with feature enhancements and bug fixes across the platform.
https://community.openlinksw.com/t/virtuoso-08-03-3333-release-notes/4872
Here are my talks between now and the end (and a little beyond) of my final #Wikimédia residency at a #URFIST. https://fr.wikipedia.org/wiki/Projet:Wikifier_la_science/Nice #Wikipedia #SPARQL #OpenRefine #Wikidata #WikiCommons #LicenceLibre
Looking for some #SPARQL help, as I'm a newbie to this and my mental model of how triple stores work is incomplete.
I have two versions of the same set of terms in two Turtle files, and I want to load them up into a triple store and then compare them to see what changed: terms updated, added or deleted.
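One common approach: load each Turtle file into its own named graph (the graph names below are arbitrary, e.g. `urn:old` and `urn:new`) and diff them with FILTER NOT EXISTS:

```sparql
# Triples present in the new version but not the old (i.e. additions).
# Swap the two graph names to get the deletions instead.
SELECT ?s ?p ?o WHERE {
  GRAPH <urn:new> { ?s ?p ?o }
  FILTER NOT EXISTS { GRAPH <urn:old> { ?s ?p ?o } }
}
```

A term that shows up in both the additions and the deletions result sets was updated; one that appears only in additions or only in deletions was added or removed outright.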
I just figured out that combining marker clusters with layers in #SPARQL queries visualising results from #Wikidata on maps can give you a nice coloured classification of periodicals by publishing language.
Unfortunately the query is too long for posting it here ...
Update: created a gist for this query at https://gist.github.com/tillgrallert/3999b5da1616107a483221bc04a420c2
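The general pattern is to bind the classification value to a variable named ?layer, which the Wikidata Query Service map view uses to colour and cluster markers. A minimal sketch along those lines (the class and property choices here are illustrative, not the gist's exact query):

```sparql
#defaultView:Map
SELECT ?periodical ?periodicalLabel ?coord ?layer WHERE {
  ?periodical wdt:P31 wd:Q1002697 ;    # instance of: periodical
              wdt:P407 ?language ;     # language of work or name
              wdt:P291 ?place .        # place of publication
  ?place wdt:P625 ?coord .             # coordinate location for the map
  # The English label of the language becomes the layer name / marker colour
  ?language rdfs:label ?layer . FILTER(LANG(?layer) = "en")
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 500
```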
At @swat4hcls there was a presentation by @albdrg that I really like: https://www.youtube.com/watch?v=F4Nl-nmLZA4 This uses #SPARQL to combine data in a privacy aware #beacons-api and combine it with public #knowledgegraph like #uniprot by @SIB and #wikidata @wikidata to reduce cost of the data integration challenge required to answer clinical question.
See also the #dataviz of the #Sparql queries on the #FranceArchives identifier in #Wikidata: https://patrimoine-et-numerique.fr/data-visualisations/83-des-gens-dans-les-archives-a-propos-de-l-identifiant-francearchives-agent-sur-wikidata#femmes-proportion
I am particularly fond (and proud) of the one on the birthplaces of women in the archives, by century (poke @belett): https://w.wiki/7CgB
#archives #viedarchiviste #herstory #histoire #histodons #sourceshistoriques #inventaires #archivistodons #histodons #femmes #journeeinternationaledesdroitsdesfemmes #womeninhistory 2/
A #SPARQL-Request: This item https://database.factgrid.de/wiki/Item:Q1194835 has the official results of the recent German elections. You can run a query on it: https://tinyurl.com/24yzfcax - but how do you add the units? - much obliged, Olaf
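In a Wikibase such as FactGrid, units live on a statement's full value node, so you have to go through the statement (p:/psv:) rather than the truthy wdt: path. A generic sketch, with the prefixes spelled out and the predicates left as variables since I don't know the exact properties used on that item:

```sparql
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX fg: <https://database.factgrid.de/entity/>

SELECT ?prop ?amount ?unit WHERE {
  fg:Q1194835 ?prop ?statement .
  # Walk into the statement's full value node to reach amount and unit
  ?statement ?psv ?valueNode .
  ?valueNode wikibase:quantityAmount ?amount ;
             wikibase:quantityUnit ?unit .
}
```

The ?unit variable binds the unit item's IRI; joining it to the label service (or a rdfs:label triple) then gives a human-readable unit alongside each result.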
Yes: @cellosaurus #SPARQL endpoint is alive and ready to shine :) Please try it out at:
#DHd2025 in the #Slider: via open field mapping, researchers & institutions in #DigitalHumanities & #DigitalHistory in the DACH region can be mapped with the help of #Wikidata & #SPARQL:
This is a more user-friendly SPARQL query editor with 111 example queries.
It offers three main features: (i) automatic query example rendering, (ii) precise autocomplete based on existing triple patterns including within SERVICE clauses, and (iii) a data-aware schema visualization.
https://sib-swiss.github.io/sparql-editor/
There is an arXiv article describing this tool at:
https://arxiv.org/abs/2503.02688