
Posts Tagged ‘Semantic Web’

Smarter Metadata — Aiding Discovery in Next Generation E-book and E-journal Gateways

In Linked Data, Technology on March 8, 2011 at 3:13 pm

Source: Andrew Mason on flickr

From my February post on The Scholarly Kitchen —

With the recent surge in library e-book sales, serials aggregators are racing to add e-books to their platforms. ProQuest’s recent acquisition of ebrary and JSTOR’s expansion into current journals and e-books signal a shift from standalone e-book and e-journal aggregator platforms to mixed content gateways, with e-books and e-journals living cheek by jowl in the same aggregation.

Meanwhile, researchers have become accustomed to the big search engines, and have shifted from reading to skimming. As the authors of an article in the January issue of Learned Publishing, “E-journals, researchers – and the new librarians,” summarize:

Gateway services are the new librarians. . . . Reading should not be associated with the consumption of a full-text article. In fact, almost 40% of researchers said they had not read in full the last important article they consulted. . . . ‘Power browsing’ is in fact the consumption method of choice.

These changes in behavior mean that gateway vendors have to develop more sophisticated tools for organizing and surfacing content. ProQuest, OCLC, EBSCO, and others have responded by creating new tools and systems. But is it enough?

Publishers often discuss distinctions between e-book and e-journal business and access models, but the truly complex differences in e-books and e-journals reside beneath the surface, in the metadata layer. Understanding and compensating for these differences is essential for interoperable content discovery and navigation when mixed e-book and e-journal content is delivered in large-scale databases, which is increasingly the norm.
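The normalization problem described above can be made concrete with a small sketch. This is a hypothetical illustration, not any vendor's actual schema: the field names, identifiers, and the `normalize` function are all assumptions, chosen only to show how records with different native metadata (ISBN-keyed e-books versus ISSN/DOI-keyed journal articles) might be mapped into one discovery index.

```python
# Hypothetical sketch: e-book and e-journal records arrive with different
# identifiers and field names, so a mixed-content gateway must map both
# onto a shared discovery schema. All field names here are illustrative.

ebook = {"isbn": "978-0-123456-47-2", "title": "Digital Libraries",
         "publisher": "Example Press", "chapter_count": 12}

article = {"issn": "1234-5678", "doi": "10.1000/xyz123",
           "article_title": "Power Browsing",
           "journal_title": "Learned Publishing"}

def normalize(record):
    """Map a native record onto a common discovery schema."""
    if "isbn" in record:
        return {"id": record["isbn"], "id_type": "ISBN",
                "title": record["title"], "content_type": "e-book"}
    if "issn" in record:
        return {"id": record.get("doi", record["issn"]),
                "id_type": "DOI" if "doi" in record else "ISSN",
                "title": record["article_title"],
                "content_type": "e-journal article"}
    raise ValueError("unrecognized record type")

# Both records now sit side by side in one searchable index.
index = [normalize(r) for r in (ebook, article)]
```

In practice each branch of such a crosswalk hides a standards problem of its own (ONIX-style book feeds versus journal article metadata), which is exactly the complexity the paragraph above points to.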

Continue reading on TSK.


Can New XML Technologies and the Semantic Web Deliver on Their Promises?

In Innovation, Linked Data, Technology on May 10, 2010 at 3:04 pm

Source: Petr Kratochovil http://www.publicdomainpictures.net


Read my complete post on The Scholarly Kitchen. Excerpt:           

There is active debate on the Web about the potential for Web 3.0 technologies and the standards that will be adopted to support them. Writing for O’Reilly Community, Kurt Cagle has remarked:           

My central problem with RDF is that it is a brilliant technology that tried to solve too big a problem too early on by establishing itself as a way of building “dynamic” ontologies. Most ontologies are ultimately dynamic, changing and shifting as the requirements for their use change, but at the same time such ontologies change relatively slowly over time.           

As of January 2009, when Cagle wrote this, RDF had failed to garner widespread support from the Web community — but it has gained significant traction during the past year, including incorporation in the Drupal 7 Core.             

The promise within this alphabet soup of technologies is that semantic Web standards will support the development of utilities that:           

  • Provide access to large repositories of information that would otherwise be unwieldy to search quickly
  • Surface relationships within complex data sets that would otherwise be obscured
  • Are highly transferable
  • Deliver democratized access to research information
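The promises above rest on one simple data model. As a minimal sketch, assuming nothing from the original post, the following hand-rolled triple store shows the core RDF idea: facts stored as subject–predicate–object triples, with relationships surfaced by pattern matching, the same kind of join a SPARQL query performs over a real RDF store. The entity names are illustrative only.

```python
# Minimal illustration of the RDF data model: a set of
# subject-predicate-object triples queried by pattern matching.
# Entity and predicate names are invented for the example.

triples = {
    ("ebrary", "acquiredBy", "ProQuest"),
    ("ProQuest", "type", "Aggregator"),
    ("JSTOR", "type", "Aggregator"),
    ("ProQuest", "offers", "e-books"),
    ("JSTOR", "offers", "e-books"),
}

def match(s=None, p=None, o=None):
    """Return every triple matching the pattern; None is a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# Which aggregators offer e-books? Joining two patterns surfaces a
# relationship that neither triple states on its own.
aggregators = {s for s, _, _ in match(p="type", o="Aggregator")}
ebook_vendors = {s for s, _, _ in match(p="offers", o="e-books")}
print(sorted(aggregators & ebook_vendors))  # ['JSTOR', 'ProQuest']
```

Because every fact uses the same three-part shape, new predicates can be added without changing the schema, which is what makes the model attractive for the "dynamic ontologies" Cagle describes and costly to engineer well at scale.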

But there are risks. Building sites that depend on semantic technologies and RDF/XML can take longer and cost more initially. In a stalled economy, long-term financial vision is harder to come by, but those who have it may truly leapfrog. In addition, there are concerns about accuracy, authority, and security within these systems that their architects must address before they can reach the mainstream.

… [O]ne may wonder whether this is an all-or-nothing proposition. Without speed and consistent delivery of reliable results, projects such as these may fail to meet user expectations and be dead in the water. On the flip side, if RDF/XML and its successors can accomplish what they purport to, they will drive significant advances in research by providing the capacity to dynamically derive rich meaning from relationships as well as content.

Visualize This: LinkTV and Sunlight Labs Move to Put Data Into Action

In Innovation, Linked Data, Technology on March 10, 2010 at 10:00 am

Source: Socialhallucinations.com


Read my complete post on The Scholarly Kitchen. Excerpts:     

If they build it, will we go? That’s a question being posed by two open data exercises, one underway and another planned for later this year. Both are attempts to use information transparency to make governments more engaging and accountable.

Sunlight’s mission is to open government and “make it more transparent, accountable, and responsible.” To accomplish this, the Sunlight Labs site serves as a community space where staff and community programmers can share open-source code, APIs, publicly available data sets, and ideas — resulting in co-created utilities that help both organizations and the public interpret public data, often aided by mobile apps or Flash visualization technologies.