AC

Archive for the ‘Linked Data’ Category

Smarter Metadata — Aiding Discovery in Next Generation E-book and E-journal Gateways

In Linked Data, Technology on March 8, 2011 at 3:13 pm

Source: Andrew Mason on flickr

From my February post on The Scholarly Kitchen —

With the recent surge in library e-book sales, serials aggregators are racing to add e-books to their platforms. ProQuest’s recent acquisition of ebrary and JSTOR’s expansion into current journals and e-books signal a shift from standalone e-book and e-journal aggregator platforms to mixed content gateways, with e-books and e-journals living cheek by jowl in the same aggregation.

Meanwhile, researchers have become accustomed to the big search engines, and have shifted from reading to skimming. As the authors of an article in the January issue of Learned Publishing, “E-journals, researchers – and the new librarians,” summarize:

Gateway services are the new librarians. . . . Reading should not be associated with the consumption of a full-text article. In fact, almost 40% of researchers said they had not read in full the last important article they consulted. . . . ‘Power browsing’ is in fact the consumption method of choice.

These changes in behavior mean that gateway vendors have to develop more sophisticated tools for organizing and surfacing content. ProQuest, OCLC, EBSCO, and others have responded by creating new tools and systems. But is it enough?

Publishers often discuss distinctions between e-book and e-journal business and access models, but the truly complex differences between e-books and e-journals lie beneath the surface, in the metadata layer. Understanding and compensating for these differences is essential for interoperable content discovery and navigation when mixed e-book and e-journal content is delivered in large-scale databases, which is increasingly the norm.
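To make the point concrete (this sketch is mine, not part of the original post), here is a minimal Python illustration of normalizing e-book chapter metadata and e-journal article metadata into a single discovery record. The field names and record shapes are hypothetical simplifications, not any vendor’s actual schema; real feeds would be ONIX for books and article-level metadata such as CrossRef or JATS, which differ far more than this suggests.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscoveryRecord:
    """A common, simplified record for a mixed e-book/e-journal index."""
    title: str            # chapter or article title
    container_title: str  # book title or journal title
    identifier: str       # e.g., ISBN for a book, DOI for an article
    identifier_type: str
    year: Optional[int] = None

def from_ebook_chapter(chapter: dict) -> DiscoveryRecord:
    # Hypothetical e-book chapter fields for this sketch.
    return DiscoveryRecord(
        title=chapter["chapter_title"],
        container_title=chapter["book_title"],
        identifier=chapter["isbn"],
        identifier_type="ISBN",
        year=chapter.get("pub_year"),
    )

def from_journal_article(article: dict) -> DiscoveryRecord:
    # Hypothetical article fields; real records carry DOIs, ISSNs, volume/issue.
    return DiscoveryRecord(
        title=article["article_title"],
        container_title=article["journal_title"],
        identifier=article["doi"],
        identifier_type="DOI",
        year=article.get("pub_year"),
    )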

Continue reading on TSK.


The DISCLOSE Act: New Media, Old Politics, and the Fight for Public Data

In Linked Data, Technology, Transparency on July 7, 2010 at 9:02 am

Source: Beth Kanter on flickr

 

Read my entire post on The Scholarly Kitchen. An excerpt:  

While the notion that information wants to be free has driven many movements around government-financed data and research, it pays to remember that covert political maneuvering and paying for influence are as old as civilization. And some of these forces don’t want information to be free.  

When some of the most well-funded corporations and interest groups also have a commercial stake in supporting transparency, you have all the ingredients for a real battle.  

Advances in networked data technologies in the new media and research sectors have made new kinds of relational analysis possible. Tim Berners-Lee’s 2009 TED Talk centers on the creation of the web of linked data—a shadow layer that will underlie the web of content, the principal vehicle of global information exchange with which we are all familiar today.  

Networked data is intrinsic to the semantic web and to data visualization, which propose alternate ways to describe, associate meaning with, and reveal relationships between data entities. Early examples, built from publicly available government data, can be found on project pages from Open PSI (in the UK) and Sunlight Labs (in the US).  

The power of analysis that can be derived from the semantic Web and visualizations of linked data relies entirely upon the accuracy and scope of the data itself—which is where the DISCLOSE Act (Democracy Is Strengthened by Casting Light On Spending in Elections) comes in.  
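As an illustration of what "relationships between data entities" means in practice (my sketch, not part of the post), the snippet below uses Python's rdflib to express campaign-spending relationships as linked-data triples and query them. The ex: vocabulary, the entities, and the amounts are invented for the example.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/spending/")  # invented vocabulary for this sketch

g = Graph()
g.bind("ex", EX)

donor = EX["AcmeHoldings"]
candidate = EX["SenatorSmith"]
contribution = EX["contribution42"]

g.add((contribution, RDF.type, EX.Contribution))
g.add((contribution, EX.donor, donor))
g.add((contribution, EX.recipient, candidate))
g.add((contribution, EX.amountUSD, Literal(250000)))

# A SPARQL query can then pull every contribution tied to a given recipient,
# no matter which data set the triples originally came from.
results = g.query("""
    PREFIX ex: <http://example.org/spending/>
    SELECT ?donor ?amount WHERE {
        ?c a ex:Contribution ;
           ex:recipient ex:SenatorSmith ;
           ex:donor ?donor ;
           ex:amountUSD ?amount .
    }
""")
for donor_uri, amount in results:
    print(donor_uri, amount)

The point of the DISCLOSE Act debate is precisely the quality of the triples such a query depends on: the analysis is only as good as the underlying data.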

Read more.

Data.gov: Selling the Government and Democratization of Information

In Internet Business Models, Linked Data on May 29, 2010 at 8:03 am

Read my complete post on The Scholarly Kitchen. Excerpt:               

Last Friday marked the one-year anniversary of the Obama Administration’s Open Government Initiative (OGI). The occasion was honored with a cupcake and candle on the landing page of the newly re-designed Data.gov site and a widely disseminated announcement from the White House.             

Source: Laura Padgett on flickr

   

For global publishers who have generated a significant portion of revenue building and selling databases, a requirement to make their data freely available is a mixed blessing. Despite the fact that global access and use of the data are expected to rise exponentially, balance sheets will take a hit.               

Databases are not just part of a publisher’s portfolio; done right, they can be the most profitable part, and they have sometimes carried the less profitable and declining parts of the publishing lineup — namely, books. Presses affected by this change must quickly find new ways to recover publishing expenses and reinvent the services they provide.

Conversely, if a business has retooled to conceive of and build data services, it’s a golden egg. For publishers in adjacent spaces — CQ Press, Bloomberg, LexisNexis, Thomson Reuters, National Journal, CQ-Roll Call, the Washington Post — access to troves of free, authoritative, updated data presents a significant opportunity to create new revenue streams by developing bespoke products and services that monetize free content.

Read more.

Can New XML Technologies and the Semantic Web Deliver on Their Promises?

In Innovation, Linked Data, Technology on May 10, 2010 at 3:04 pm

Source: Petr Kratochovil http://www.publicdomainpictures.net

 

Read my complete post on The Scholarly Kitchen. Excerpt:           

There is active debate on the Web about the potential for Web 3.0 technologies and the standards that will be adopted to support them. Writing for O’Reilly Community, Kurt Cagle has remarked:           

My central problem with RDF is that it is a brilliant technology that tried to solve too big a problem too early on by establishing itself as a way of building “dynamic” ontologies. Most ontologies are ultimately dynamic, changing and shifting as the requirements for their use change, but at the same time such ontologies change relatively slowly over time.           

As of January 2009, when Cagle wrote this, RDF had failed to garner widespread support from the Web community — but it has gained significant traction during the past year, including incorporation in the Drupal 7 Core.             

The promise within this alphabet soup of technologies is that semantic Web standards will support the development of utilities that:           

  • Provide access to large repositories of information that would otherwise be unwieldy to search quickly
  • Surface relationships within complex data sets that would otherwise be obscured
  • Are highly transferable
  • Deliver democratized access to research information

But there are risks. Building sites that depend on semantic technologies and RDF/XML can take longer and cost more up front. In a stalled economy, long-term financial vision is harder to come by, but organizations that have it may truly leapfrog the competition. There are also concerns about accuracy, authority, and security within these systems that architects must address before they can reach the mainstream.

… [O]ne may wonder whether this is an all-or-nothing proposition. Without speed and consistent delivery of reliable results, projects such as these may fail to meet user expectations and be dead in the water. On the flip side, if RDF/XML and its successors can accomplish what they purport to, they will drive significant advances in research by providing the capacity to dynamically derive rich meaning from relationships as well as content.
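For readers unfamiliar with what "deriving meaning from relationships" looks like in code, here is a minimal sketch (mine, not from the post) using Python's rdflib to parse a tiny RDF/XML document and surface a relationship neither record states directly. The ex: vocabulary and the resources are invented for the illustration.

from rdflib import Graph

# A tiny RDF/XML document describing two articles that cite the same data set.
rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/research/">
  <rdf:Description rdf:about="http://example.org/research/articleA">
    <ex:cites rdf:resource="http://example.org/research/dataset1"/>
  </rdf:Description>
  <rdf:Description rdf:about="http://example.org/research/articleB">
    <ex:cites rdf:resource="http://example.org/research/dataset1"/>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=rdf_xml, format="xml")

# Surface a relationship neither record states directly:
# two articles are related because they cite the same data set.
for a, b in g.query("""
    PREFIX ex: <http://example.org/research/>
    SELECT ?a ?b WHERE {
        ?a ex:cites ?d .
        ?b ex:cites ?d .
        FILTER (STR(?a) < STR(?b))
    }
"""):
    print(f"{a} and {b} share a cited data set")

Whether such queries stay fast and reliable at the scale of real research repositories is exactly the open question the excerpt raises.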

Visualize This: LinkTV and Sunlight Labs Move to Put Data Into Action

In Innovation, Linked Data, Technology on March 10, 2010 at 10:00 am

Source: Socialhallucinations.com

 

Read my complete post on The Scholarly Kitchen. Excerpts:     

If they build it, will we go? That’s a question being posed by two open data exercises, one underway and another planned for later this year. Both are attempts to use information transparency to make governments more participatory and accountable.

Sunlight’s mission is to open government and “make it more transparent, accountable, and responsible.” To accomplish this, the Sunlight Labs site is a community space where staff and community programmers can share open-source code, APIs, publicly available data sets, and ideas — resulting in co-created utilities that help the organizations and the public interpret public data, often aided by mobile apps or Flash visualization technologies.

Reblogging: How Do We Understand Our Information?

In Linked Data on March 6, 2010 at 4:00 am

Image by murdocke23 via flickr

 

From the ground level, when a business or organization wants to understand the impact of its communications, blogs and social networks included, traditional forms of reporting no longer seem able to adequately describe the more dynamic flow of information. Read more …