
Archive for the ‘Technology’ Category

Reference Content for Mobile Devices: Free the Facts from the Format

In Technology on March 15, 2011 at 5:10 pm

Source: William Hook on flickr

An excerpt from the article that David Wojick and I have written for E-Reference Context and Discoverability in Libraries: Issues and Concepts, which will be published by IGI Global and edited by Sue Polanka, head of reference and instruction at the Wright State University Libraries:

The rapid rise of mobile devices presents reference content providers with a grand challenge. Traditional content designs, especially web pages, simply do not work on the tiny screens of mobile devices. The typical computer screen is 50 or more times larger than the typical mobile device screen: 200 to 300 square inches for the computer, compared with just four to six for the mobile device. As a result, traditional web-based content designs are virtually unreadable on the mobile screen. The solution is to restructure content radically, breaking it into tiny pieces and freeing the facts from the format. The organization will no longer appear on the screen as format, as it typically does with web pages. Instead, the key to effective presentation of factual material will be the linkages among the tiny pages, of which a great many will be required.
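
To make the restructuring concrete, here is a minimal sketch, in Python, of reference content decomposed into one-fact pages whose organization lives entirely in the links between them. The data model, page names, and facts are my own illustration, not taken from the article:

```python
# A minimal sketch (not from the article) of reference content decomposed
# into one-fact "tiny pages," where organization lives in the links rather
# than in on-screen layout. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class TinyPage:
    fact: str                                   # one small, self-contained statement
    links: dict = field(default_factory=dict)   # link label -> page id

pages = {
    "mercury": TinyPage("Mercury is the innermost planet of the Solar System.",
                        {"orbit": "mercury-orbit", "next planet": "venus"}),
    "mercury-orbit": TinyPage("Mercury orbits the Sun once every 88 days.",
                              {"back": "mercury"}),
    "venus": TinyPage("Venus is the second planet from the Sun.",
                      {"previous planet": "mercury"}),
}

def render(page_id: str) -> str:
    """One fact plus its outbound links: a whole mobile screen."""
    page = pages[page_id]
    links = "\n".join(f"  -> {label}" for label in page.links)
    return f"{page.fact}\n{links}"

print(render("mercury"))
```

Each screen shows a single fact, and everything else a reader might want is one link away; the linkages, rather than the on-screen layout, carry the organization.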

An abridged version of the article appears in the first issue of the Advances in Library and Information Science (ALIS) Newsletter.

[About the authors: Alix Vance owns Architrave Consulting and is Chief Operating Officer at The Center for Education Reform. David E. Wojick is Senior Consultant for Innovation at the Office of Scientific and Technical Information (OSTI) of the U.S. Department of Energy. OSTI operates several of the world’s largest technical reference portals, including www.science.gov and www.worldwidescience.org.]


Smarter Metadata — Aiding Discovery in Next Generation E-book and E-journal Gateways

In Linked Data, Technology on March 8, 2011 at 3:13 pm

Source: Andrew Mason on flickr

From my February post on The Scholarly Kitchen —

With the recent surge in library e-book sales, serials aggregators are racing to add e-books to their platforms. ProQuest’s recent acquisition of ebrary and JSTOR’s expansion into current journals and e-books signal a shift from standalone e-book and e-journal aggregator platforms to mixed content gateways, with e-books and e-journals living cheek by jowl in the same aggregation.

Meanwhile, researchers have become accustomed to the big search engines and have shifted from reading to skimming. As the authors of “E-journals, researchers – and the new librarians,” an article in the January issue of Learned Publishing, summarize:

Gateway services are the new librarians. . . . Reading should not be associated with the consumption of a full-text article. In fact, almost 40% of researchers said they had not read in full the last important article they consulted. . . . ‘Power browsing’ is in fact the consumption method of choice.

These changes in behavior mean that gateway vendors have to develop more sophisticated tools for organizing and surfacing content. ProQuest, OCLC, EBSCO, and others have responded by creating new tools and systems. But is it enough?

Publishers often discuss distinctions between e-book and e-journal business and access models, but the truly complex differences between e-books and e-journals reside beneath the surface, in the metadata layer. Understanding and compensating for these differences is essential for interoperable content discovery and navigation when mixed e-book and e-journal content is delivered in large-scale databases, which is increasingly the norm.
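
As a hedged illustration of what compensating for those differences can involve, the Python sketch below normalizes a book-level record and an article-level record into one common discovery record so the two can be indexed side by side. The field names, target schema, and records are invented for this example and do not represent any vendor’s actual format:

```python
# Hypothetical illustration: e-books and e-journal articles describe different
# units (whole works vs. items inside a serial), so a mixed-content gateway
# must normalize their metadata before indexing. All field names and records
# are invented for this sketch.

def normalize_ebook(record: dict) -> dict:
    """Map a book-level record to a common discovery record."""
    return {
        "title": record["title"],
        "container": None,                       # a monograph is its own container
        "identifier": ("isbn", record["isbn"]),
        "unit": "monograph",
    }

def normalize_article(record: dict) -> dict:
    """Map an article-level record to the same discovery record."""
    return {
        "title": record["article_title"],
        "container": record["journal_title"],    # articles live inside journals
        "identifier": ("doi", record["doi"]),
        "unit": "article",
    }

index = [
    normalize_ebook({"title": "An Example Monograph",
                     "isbn": "978-0-000-00000-0"}),
    normalize_article({"article_title": "An Example Article",
                       "journal_title": "An Example Journal",
                       "doi": "10.0000/example"}),
]
print(index)
```

The toy mapping is trivial by design; in real systems the complexity lives in the mapping itself, because book and article metadata differ in granularity, identifiers, and controlled vocabularies.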

Continue reading on TSK.

The DISCLOSE Act: New Media, Old Politics, and the Fight for Public Data

In Linked Data, Technology, Transparency on July 7, 2010 at 9:02 am

Source: Beth Kanter on flickr

 

Read my entire post on The Scholarly Kitchen. An excerpt:  

While the notion that information wants to be free has driven many movements around government-financed data and research, it pays to remember that covert political maneuvering and paying for influence are as old as civilization. And some of these forces don’t want information to be free.  

When some of the most well-funded corporations and interest groups also have a commercial stake in supporting transparency, you have all the ingredients for a real battle.  

Advances in networked data technologies in the new media and research sectors have made new kinds of relational analysis possible. Tim Berners-Lee’s 2009 TED Talk centers on the creation of the web of linked data—a shadow layer that will underlie the web of content, the principal vehicle of global information exchange with which we are all familiar today.  

Networked data is intrinsic to the semantic web and to data visualization, which propose alternate ways to describe, associate meaning with, and reveal relationships between data entities. Early examples, built from publicly available government data, can be found on project pages from Open PSI (in the UK) and Sunlight Labs (in the US).
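
To show what networked data looks like at ground level, here is a minimal sketch using rdflib, a real Python library for working with RDF. The URIs and the donatedTo predicate are invented for illustration and are not drawn from Open PSI or Sunlight Labs:

```python
# A minimal linked-data sketch using rdflib (pip install rdflib).
# The entities and the "donatedTo" predicate are invented for illustration.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

donor = URIRef(EX + "donor/acme")
candidate = URIRef(EX + "candidate/jane-doe")

g.add((donor, RDF.type, FOAF.Organization))
g.add((donor, FOAF.name, Literal("Acme Corp")))
g.add((donor, EX.donatedTo, candidate))

# Serialize the graph as Turtle: the "shadow layer" in machine-readable form.
print(g.serialize(format="turtle"))
```

Once relationships are expressed as triples, graphs from different sources can be merged and queried together, which is precisely the kind of relational analysis at stake in the transparency debate below.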

The power of analysis that can be derived from the semantic Web and visualizations of linked data relies entirely upon the accuracy and scope of the data itself—which is where the DISCLOSE Act (Democracy Is Strengthened by Casting Light On Spending in Elections) comes in.  

Read more.

2010 Digital Trends and Topics: Mobile Device Applications for Reference Information

In Internet Business Models, Services, Technology on June 18, 2010 at 9:02 am

Source: myuibe on flickr

 

A link to PDF/PPT slides I presented earlier this month at the SSP Annual Meeting in San Francisco:

Mobile_Reference_Vance_2010    

Includes wireframes from Cultured Code and app examples from Culinate/Wiley, NASA, Zinio, Amazon, the World Bank, and Shazam, plus reporting on trends, innovations, and open questions.

Serious Games, Science Communication, and One Utopian Vision

In Innovation, Technology on June 9, 2010 at 12:52 pm

"enercities" by centralasian on flickr

 

Read my complete post on The Scholarly Kitchen. Excerpt:  

Even for mainstream students, gaming is a ubiquitous, informal learning vehicle. According to a January piece in the New York Times, “If Your Kids Are Awake, They’re Probably Online,” people ages 8 to 18 spend an average of one hour and thirteen minutes a day gaming, compared with 38 minutes a day using print.

Dr. Michael Rich, a pediatrician at Children’s Hospital Boston who directs the Center on Media and Child Health, said that with media use so ubiquitous, it was time to stop arguing over whether it was good or bad and accept it as part of children’s environment, “like the air they breathe, the water they drink and the food they eat.”

Over the next 15 years, these users, who experience content rather than strictly reading it, will come to make up the community of scientists, researchers, and society members who are our customers. It may be difficult for traditionalists to make the conceptual leap from journal or book publishing to scientific simulations and instructional gaming. However, as economics and culture align, these will become part of the fabric of the industry.

Not everyone will thrive in a transformed business landscape. For centuries, scientific publishers have been scribes and disseminators of content who have translated the activity of science into a linear, replicable, two-dimensional experience. Sometimes even the most accomplished companies can’t transition outside their core specialties. (Apple, for example, is an exemplary device manufacturer and marketing company that has been comparatively ineffective in the software space. Microsoft, conversely, has excelled in software but failed to make headway in devices.)  

Is it better, then, for publishers to focus on the curation and filtering of content, leaving user services development to others? Or should they be cultivating new skills that prepare them for a different future?  

Read more.

The Digital Universe, Information Shadows, and Paying for Privacy

In Innovation, Privacy, Technology on May 17, 2010 at 2:08 pm

"The Shadow Knows" by GregStruction on Google Images

 

Read my complete post on The Scholarly Kitchen. Excerpt:  

Everywhere we turn, we encounter debates over the risks and legality of uses of “private” data by social media mega-businesses like Facebook and Twitter.  

Google is the latest culprit to be caught in the spotlight.  

The lead technology piece in Saturday’s New York Times zeroed in on Google’s violation of German privacy laws, in connection with the company’s admission that it had systematically harvested private data from households in Europe and the US since 2006 — including email content and websites visited — in the course of capturing drive-by images for Google’s Street View photo archive.  

There are already books to teach Internet privacy “survival skills” and software downloads to “erase” your data footprint. It won’t be surprising to find that some are willing to pay generously for services that sanitize their information shadows with virtual lye and steel wool. Privacy will become a scarce commodity, and its market value will rise. When privacy is monetized, we may assign relative values to our own private information according to the type of information that is protected or made available.

While papers have touched on the potentially inverse relationship between user privacy and the efficacy of Web 2.0 social ranking and recommendation engines, social media engines are only the beginning of what is to come …

Read more.

Can New XML Technologies and the Semantic Web Deliver on Their Promises?

In Innovation, Linked Data, Technology on May 10, 2010 at 3:04 pm

Source: Petr Kratochovil http://www.publicdomainpictures.net

 

Read my complete post on The Scholarly Kitchen. Excerpt:           

There is active debate on the Web about the potential for Web 3.0 technologies and the standards that will be adopted to support them. Writing for O’Reilly Community, Kurt Cagle has remarked:           

My central problem with RDF is that it is a brilliant technology that tried to solve too big a problem too early on by establishing itself as a way of building “dynamic” ontologies. Most ontologies are ultimately dynamic, changing and shifting as the requirements for their use change, but at the same time such ontologies change relatively slowly over time.           

As of January 2009, when Cagle wrote this, RDF had failed to garner widespread support from the Web community — but it has gained significant traction during the past year, including incorporation into the Drupal 7 core.

The promise within this alphabet soup of technologies is that semantic Web standards will support the development of utilities that:           

  • Provide access to large repositories of information that would otherwise be unwieldy to search quickly
  • Surface relationships within complex data sets that would otherwise be obscured
  • Are highly transferable
  • Deliver democratized access to research information

But there are risks. Building sites that depend on semantic technologies and RDF/XML can take longer and cost more up front. In a stalled economy, long-term financial vision is harder to come by, but organizations that have it may truly leapfrog their competitors. In addition, there are concerns about accuracy, authority, and security within these systems, which their architects must address if they are to reach the mainstream.

… [O]ne may wonder whether this is an all-or-nothing proposition. Without speed and consistent delivery of reliable results, projects such as these may fail to meet user expectations and be dead in the water. On the flip side, if RDF/XML and its successors can accomplish what they purport to, they will drive significant advances in research by providing the capacity to dynamically derive rich meaning from relationships as well as content.
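
As a small sketch of the second promise in the list above, surfacing otherwise obscured relationships, the rdflib snippet below uses a SPARQL 1.1 property path to follow citation chains of any length. The data and the cites predicate are invented for illustration:

```python
# Sketch: surfacing indirect relationships with a SPARQL 1.1 property path.
# The records and the "cites" predicate are invented for illustration.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.paperA, EX.cites, EX.paperB))   # A cites B
g.add((EX.paperB, EX.cites, EX.paperC))   # B cites C

# ex:cites+ matches one or more "cites" hops, so paperC surfaces even though
# paperA never cites it directly.
query = """
PREFIX ex: <http://example.org/>
SELECT ?cited WHERE { ex:paperA ex:cites+ ?cited }
"""
for row in g.query(query):
    print(row.cited)   # http://example.org/paperB, http://example.org/paperC
```

A flat keyword search over the same records would return only direct matches; the property path walks the graph and surfaces the indirect links.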

A Future of Touch and Gestures: New Interfaces Driving Scientific Information Presentation

In Innovation, Technology on May 5, 2010 at 8:49 pm

Read my complete post on The Scholarly Kitchen. Excerpts:     

The variety of app-delivered games and tools currently available offers a representative taste of current capabilities in graphics manipulation and interactive input. Multitouch technology itself, however, is still in its nascent stage, and its true potential remains untapped.

Displax and Archimedes Solutions are two companies seeking to seize the opportunities offered in this emerging area.   

Source: naturalinteraction.org by liquene on Flickr

 

As these technologies continue to improve, they will significantly alter the ways we work with and experience information, including images and data. We will increasingly move from environments governed by the restrictions of mice and keyboards to more interactive environments — in the vein of the Wii, iPad, and iPhone — that support fluid, intuitive, and experiential exploration of scientific and non-scientific content and media.

While timelines are uncertain, expect that consumers of our information will include traditionalists/linear thinkers and visual/experiential thinkers, all of whom will increasingly require that we meet them “where they are” by providing a suite of mechanisms for interacting with content of various types.

Mobile Devices and Privacy — Why It’s So Easy to Swap Personal Information to Satisfy an Itch

In Innovation, Privacy, Technology on April 20, 2010 at 8:39 am

Source: Alan Cleaver on Flickr

 

Read my complete post on The Scholarly Kitchen. Excerpts:    

What customers in all business areas increasingly require is customized, immediate information, which often involves transparency about personal information of one sort or another.

Privacy concerns can be raised in many settings, but often they are ultimately trumped and compromised by some pressing need or wish.  

When I’m stuck in traffic and am dying to know how to get out, I’m more than game for enabling geolocation.  

I can even hope others will drop their privacy screens.  

If I’ve been trying to corner this lobbyist or congresswoman for weeks, I want to know immediately when she comes into the Starbucks down the street.  

If I’m itching to try that no-reservations restaurant, I want to log in to see the real-time video showing who’s there and what the line looks like.  

One may eschew Twitter or Buzz strictly as navel-gazing technologies, but there are very real business utilities that can be derived from them — especially given the newly more diverse options for immediate access via any combination of devices.  

More about marketing via Twitter and mobile-enabled research.

The Scholarly Kitchen Has Been Nominated for a Webby Award

In Awards, Technology on April 13, 2010 at 8:33 pm

Source: Wikipedia

 

The Scholarly Kitchen, a blog that I co-author, has been nominated for a Webby Award. From a large field of entrants, we’ve been shortlisted among the Top 5 Business Blogs.

Now, you can help us out by voting.   

  • Visit The Scholarly Kitchen or go directly to the Webby Awards site to register to vote for your preferred nominee.
  • Complete the brief registration form, wait for a confirmation email (check your junk mail folder if it doesn’t arrive), and click through to return and cast your vote.
  • If you have trouble finding us, you can use the drop-down “Categories” menu to find “Business – Blog” or type “Scholarly Kitchen” in the general search box for the site. Either will take you directly to the voting button in the Best Business Blogs category.

My fellow “chefs” and I are spreading the word (shamelessly) to win more votes for this independent blog, which was established by The Society for Scholarly Publishing in 2008.

Please spread the word to friends and colleagues!  

Best wishes, Alix