Archive for the ‘Innovation’ Category

Throwing Down the Gauntlet: Moving from Ideas to Action

In Innovation, Society for Scholarly Publishing on May 27, 2011 at 7:00 am

Image by Robyn Jay on flickr

Everyone wants to find new ways to compete while they also keep the machine oiled and running. The planners of the 2011 SSP IN Meeting have been wrangling with this duality between big ideas and practical requirements for weeks.

A recent story in Information Week pinpoints the need for executives to have the skills to evaluate, prioritize, and sell ideas in order to take them from the drawing board to the market.

Companies are attempting to codify the processes through which innovation can be nurtured. More important than ideas, which quite frankly are cheap, is the ability to pick which concepts are worthy of the heavy investment of time, money, and corporate mindshare required to take them to productization. — Alexander Wolfe in “Top 5 Tech Trends for 2011”

But keeping things running is what’s keeping people up at night as well, as a colleague found out when she spoke with a handful of society publishing directors recently (under conditions of anonymity). Their concerns are:

  • Competition on price and availability: New ways to deliver content, which are taking shape on the web, threaten publishers because they compete, not on quality, but on price. Our journals are costly to produce, the subscription model is threatened, and it is getting more challenging to compete and retain market share against more inexpensive, “good enough” content.
  • Pressure from consolidation: Users are getting irritated with all the interfaces. Suffering from information overload, our users say that they prefer fewer, better, go-to resources. How will smaller publishers compete with the behemoth databases, especially in an environment of acquisition and consolidation? Is PubMed Central the model for the future? No journals, no branding, just data-based information?
  • Workload and resource demands: The workload of accepted papers is increasing but publications revenues cannot keep up with increasing demands for services and programs by other areas of the society that either do not produce revenues or are not self-supporting.
  • Changing membership: Making our publications program more relevant to new members, who may not have terminal degrees, without devaluing journals for academic authors and reviewers.
  • Journals cuts: Our discipline is facing challenges — many departments have been closed or merged with departments for other disciplines. In some places, the subject is being taught by people without a discipline-specific background. There are scientists doing pharmacological research who identify with larger practice areas, not sub-specialties. With fewer people to speak up in defense of journals, it has become easier for them to go on the chopping block. There is also the question of how to compete with big package deals — our journals are significantly less expensive than their commercially published competitors but are easier to cut than larger packages. Most, if not all, consortia will not bother with small numbers of journals, so we get squeezed out of that market.
  • Keeping pace with technology: Semantic tagging is important, but I have neither the money nor the time to implement it.  My editorial boards are seeing new technologies before I do — and want to know when I’m going to adopt them.  The pace of change seems to be quickening. Staying informed is a challenge and arriving at ways to implement technologies is more difficult. At the same time, my resources are shrinking.
  • The squeeze: What if subscription sales decline and scientists’ research grants can’t support publication charges?
  • The bottom line: What keeps me up at night? The need for more sustainable business models.

SSP members were also recently asked to vote for three strategic issues (of eight identified by the SSP Board of Directors) that they felt would most significantly impact them — and about which the society is positioned to take constructive action. What they flagged:

  1. User expectations that they can get information in variable shapes, sizes, and prices (especially free) challenge existing publisher/librarian roles and business models (128 votes)
  2. New products and technology require new skill sets from employees, straining traditional career progression and job descriptions, and requiring constant revamping and retraining (123 votes)
  3. Publishers’ increasing reliance on multiple, unstable revenue streams places a premium on business agility, adaptability, and collaborative partnerships (104 votes)

IN Meeting organizers have incorporated feedback from SSP members about their strategic priorities throughout the planning process. The dual purpose of this 2.5-day meeting is to give attendees new ideas and experiences and to help them translate what they’ve heard into practical, needs-focused actions — as moderator Mary Waltham has put it, steps they can take “within the first 10 days back in the office.”

Continue reading on The Scholarly Kitchen.


Innovation and Longevity in Digital Publishing: Surfing the S-Curve

In Innovation on March 22, 2011 at 12:37 pm

Source: Cogdogblog on flickr

Some scholars — including Clayton Christensen, author of “The Innovator’s Dilemma” and “The Innovator’s Solution” — argue persuasively that the disruption necessary to create viable innovations must come from outside an industry’s traditional ecosystem. This was elaborated upon recently in an interview with Soomo Publishing’s CEO David Lindrum:

Christensen helped us understand why, in 15 years of trying, we failed to get traditional publishers to build these [new] kinds of resources. . . . Everything in traditional publishing is built around the book, from how the market is analyzed, to the range of features considered and the process of product creation all the way down to how the rep learns a product and makes a call. Every process, metric, and assumption is built around print. . . . [I]f Christensen’s model from Innovator’s Dilemma holds up in this market, the new products must come from outsider organizations and will flourish first in fields that traditional publishers see as low-margin and undesirable.

Publishers, who often struggle with innovation and experimentation, might benefit from roadmaps for publishing innovation that encompass concept development, business modeling, market readiness, and audience targeting. For example, David Wojick and I recently collaborated on an article, “Reference Content for Mobile Devices: Free the Facts from the Format,” that steps through the initial challenges of transitioning content from websites to mobile devices.

After translating theory to viable business models, the next elephant in the room is consumer readiness. A body of literature has been produced since the 1950s about the technology adoption curve. The diffusion of innovations theory, summarized below, was published by Everett Rogers in 1962. From Wikipedia:

Technology adoption typically occurs in an S curve . . . [d]iffusion of innovations theory, pioneered by Everett Rogers, posits that people have different levels of readiness for adopting innovations and that the characteristics of a product affect overall adoption. Rogers classified individuals into five groups: innovators, early adopters, early majority, late majority, and laggards. In terms of the S curve, innovators occupy 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%.
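Rogers’ percentages are not arbitrary: the adopter categories are conventionally carved from a normal (bell-curve) distribution of adoption times, with boundaries at one and two standard deviations from the mean. Here is a minimal sketch using only the Python standard library; the exact normal areas it prints (≈2.3%, 13.6%, 34.1%, 34.1%, 15.9%) are the values conventionally rounded to Rogers’ familiar 2.5/13.5/34/34/16 split:

```python
from statistics import NormalDist

# Rogers' adopter categories as slices of a standard normal curve of
# adoption times; boundaries sit at -2, -1, 0, and +1 standard deviations.
nd = NormalDist(mu=0.0, sigma=1.0)
bounds = [float("-inf"), -2.0, -1.0, 0.0, 1.0, float("inf")]
labels = ["innovators", "early adopters", "early majority",
          "late majority", "laggards"]

shares = {name: nd.cdf(hi) - nd.cdf(lo)
          for name, lo, hi in zip(labels, bounds, bounds[1:])}

for name, share in shares.items():
    print(f"{name:15s} {share:5.1%}")

# The cumulative sum of these slices traces the S curve itself: adoption
# starts slowly, accelerates through the majorities, and flattens as
# only the laggards remain.
```

The S shape falls out of the same model: plotting the cumulative share against time (the normal CDF) gives the slow start, steep middle, and long tail the quote describes.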

Consider the recent tipping point for e-books. According to the diffusion of innovations theory, the market for e-books is transitioning from early adopters to early majority. How long have we been waiting for this to hit?

Many in our industry recall visiting the well-funded NetLibrary campus in Denver during the late 1990s, followed by the company’s crash and subsequent reboots under new owners. Competitors — like ebrary, EBook Library, and Knovel — entered the market around 2004 and engaged the innovators and early adopters in our community. But it wasn’t until 2009–2010 that e-books gained a significant measure of commercial traction. We’re now seeing the acquisitions and consolidations that denote a maturing market, ushered in by Amazon, Apple, and Google.

Hybrid cars were viable before consumers were ready for them. Text messaging emerged in the early 1990s and has taken decades to become “state of the art.” Patron-driven access (PDA) models have been available for several years but have only now entered our mainstream conversation.

Trends and ideas spark around us all the time. Some gain early acceptance, seemingly level off, and then burst onto the mainstream scene years later. Insight into innovation adoption theories can help us gauge how best to work market levers in order to establish new products and win over larger markets.

Randy Elrod, an artist and author, writes about three types of audience influencers and their impact on innovation adoption in “How to Diffuse Ideas and Influence People:”

(1) Opinion leadership is the degree to which an individual is able to influence other individuals’ attitudes or overt behavior informally in a desired way with relative frequency. (2) A change agent is an individual who attempts to influence clients’ innovation-decisions in a direction that is deemed desirable by a change agency. (3) An aide is a less than fully professional change agent who intensively contacts clients to influence their innovation-decisions.

Generally, the fastest rate of adoption of an innovation results from influencing the innovative influencers’ decisions. As Don Henley of Eagles fame replied when an interviewer asked how it feels to be so famous and have his songs permeate society: “It’s not the fame, it’s the ripple effect I’m hoping for.”

It’s precisely this ripple effect we are seeking through our efforts to re-invent the digital publishing business. We need to achieve substantial commercial success to offset the loss of formerly stable revenue streams. We can give ourselves advantages by acquiring new skills that help us plan for distinct scenarios, take the temperature of the marketplace, renovate our ecosystems, and prime our audiences for new offerings.

Continue reading on The Scholarly Kitchen.

The Scholarly Kitchen’s Authors Revisit 2010

In Innovation on December 24, 2010 at 8:00 am

Source: hlkljgk on flickr

To recognize the holidays, Kent Anderson, Editor-in-Chief of The Scholarly Kitchen, recently invited each of the blog’s co-authors to revisit and expand upon one of their own 2010 posts. Mine appears on TSK here and is excerpted below.

Rather than choosing a “best of” my own posts, I’ve taken a step back to examine what I’ve written this year.

Main themes were innovation and new product creation — what’s next, who’s doing it, and how we get from here to there. I believe that thinking broadly about and seizing new opportunities will help us transform our business and thrive in the post-apocalyptic era of digital publishing.

With that in mind, I’ve chosen to expand on my most recent post, Higher Education: Turning a Painful Reality Into a Thriving Digital Business. Why? I’m listening to comments. More than one reader was unclear about my hypothesis/connection between virtual learning and scholarly publishing. I still believe that the connection exists and is potentially valuable. If I’m not all wet, a recent announcement in K-12 virtual learning may laterally impact digital scholarly publishing. I want us to be aware of activities in adjacent arenas in order to extract bits that help us anticipate new market needs and develop new, successful, and scalable digital products.

In Higher Education: Turning a Painful Reality Into a Thriving Digital Business, I provided an example in which a problem/opportunity, arising within the traditional knowledge-services economy, is solved by an entrepreneurial company that is better positioned to think and act innovatively than its predecessor, a traditional business that has long dominated the marketplace. The article focused on emerging alternatives in virtual learning — why they are needed, and who stands to gain. A couple of readers asked why this is relevant.

Answer: Scholarly publishers are in the knowledge business. If we want to remain in business, we need to carve out roles for ourselves in the new, transformed knowledge economy. Changes to the mechanisms through which learning is acquired will have a lateral impact. And, we can begin preparing for this now. Longstanding market definitions are breaking down. Rather than conceiving of ourselves strictly as participants in the publishing, higher education, or research sectors, it may be more useful to look at our activities in a larger context. Research, authorship, curation, teaching, and content sharing are all part of the fabric of an increasingly dynamic and scalable knowledge services continuum that is drawn together by commerce and scale.

Staying with this train of thought, it becomes clear that sea changes in adjacent knowledge economies will also influence the ways in which we do business. The pace of change is opening up any number of new opportunities, but capitalizing on these will require us to grab onto early clues, anticipate trends, and expand the boundaries of our product development.

For example: Earlier this month, former Governors Jeb Bush and Bob Wise announced the formation of a bipartisan Digital Learning Council composed of more than 50 leaders from education, government, philanthropy, business, technology, and think tanks, assembled to develop a roadmap for supporting innovation and progress in K-12 digital education. Members of the Council include executives from leading technology firms (Apple, Cisco, Dell, Intel, Google, and Microsoft) and from education and publishing (Blackboard, Houghton Mifflin, Pearson, Scholastic, and Sylvan Learning).

One of the Council’s first public activities has been to publish a paper on the “10 Elements of High Quality Digital Learning” that will serve as a structure for action in the education policy sphere.

10 Elements of High Quality Digital Learning

  • Student Eligibility: All students are digital learners.
  • Student Access: All students have access to high quality digital content and online courses.
  • Personalized Learning: All students can customize their education using digital content through an approved provider.
  • Advancement: Students progress based on demonstrated competency.
  • Content: Digital content, instructional materials, and online and blended learning courses are high quality.
  • Instruction: Digital instruction and teachers are high quality.
  • Providers: All students have access to multiple high quality providers.
  • Assessment and Accountability: Student learning is the metric for evaluating the quality of content and instruction.
  • Funding: Funding creates incentives for performance, options and innovation.
  • Delivery: Infrastructure supports digital learning.

The Digital Learning Council will focus on removing barriers that inhibit innovation in K-12 digital education. However, make no mistake; its constituents have the ability to influence the entire educational technology landscape. Reasons to anticipate that their efforts will ultimately impact the course of digital scholarly publishing:

  • Significant transformation in digital learning and educational technology will not be confined to or defined by traditional market boundaries—K-12, two-year, four-year, graduate, professional.
  • Scalable technologies and commercial incentives virtually guarantee that what is embraced in one educational arena will quickly translate to the other.
  • Support for customized, technology-centered learning will advance the obsolescence of the traditional textbook model that many publishers have taken great pains to defend.
  • To the degree that textbook publishers do not join the virtual learning movement, there will be new opportunities for scholarly publishers to downstream their content to virtual learning applications, in partnership with course delivery companies or open access platforms.

Connecting the dots, if I am a journal, reference, or textbook publisher: I am concerned about competition and the diminution of traditional revenue streams. I am seeking ways to diversify my activities, value, and income. I have considered/will consider adapting my content for delivery in instructional settings and/or developing teaching/assessment tools for secondary through post-graduate education. I should be learning more about companies that are already active in this space, like Wimba/Elluminate. [A 2009 list of 101 OpenCourseWare projects is available on The .Edu Toolbox.]

The biggest brands in the course adoption space cross boundaries. They don’t specialize in K-12 to the exclusion of higher education but work in both. They are attuned to trends in digital learning globally and domestic education policy. It’s probably not too early for us to be thinking broadly as well.

Borrowing from the “Wild West” cliché:

  • The Good: opportunities abound
  • The Bad: change is rarely comfortable
  • The Ugly: what results if we defend our forts and fail to adapt

There is plenty that we can do to create our own futures and ensure success. To begin …

  • Scan the horizon
  • Be alert to clues and opportunities that help define strategic opportunities
  • Free ourselves from traditional value, role, and sector definitions
  • Re-tool, develop more agile and flexible capabilities, explore new partnerships and business models

Happy holidays! – Alix

Higher Education: Turning a Painful Reality Into a Thriving Digital Business

In Innovation, Internet Business Models on November 1, 2010 at 8:00 am

The Chronicle of Higher Education recently released an interactive tool, Tuition Over Time, 1999-2010, which utilizes data from the annual “Trends in College Pricing” reports from the College Board and allows users to compare tuitions and fees on an institution-by-institution basis back to 1999.

From the introduction to the College Board’s 2010 report:

The recession has pushed large numbers of people who would otherwise be working full-time at secure jobs into postsecondary education. . . . Trends in College Pricing 2010 describes the unwelcome increases in published college prices these circumstances have generated and adds the more encouraging information about how much students actually pay after considering increases in available grant aid.

Image by Fibonacci Blue on flickr

The rise in prices is uncontested, and the National Center for Public Policy and Higher Education (NCPPHE) argues that, despite increases in financial aid, affordability of higher education is now in decline. According to Patrick M. Callan, NCPPHE President:

Student financial assistance from all these sources has increased to $45 billion, or an increase of 140% since 1991. But these increases have not been large enough to keep pace with the increased costs of college attendance, particularly not with tuition. . . . Between 1991 and 2005 Federal Pell Grant funding increased by 84%. But the average Pell Grant currently covers only 48% of tuition at these institutions, a decline in purchasing power despite increased federal investment.

Notwithstanding the complex socio-economic and institutional challenges this raises, the situation can be summarized in simpler terms through a business lens:

Increased social need for access to high-quality post-secondary education to support social well-being and global competitiveness + Declining affordability which further limits access to education and achievement, particularly for low-income populations = opportunity

Community colleges can play an important role in unlocking this opportunity, along with for-profit partners.

On October 5, the White House held its first “Summit on Community Colleges,” led by Dr. Jill Biden — wife of Vice President Joseph Biden and a community college instructor for 17 years. Opening remarks came from President Obama, who described his plan to foster an additional 5 million community college graduates by 2020 and emphasized the role that two-year institutions can play in developing the U.S. work force of the future. The President also introduced “Completion by Design,” a competitive grant program funded by the Bill & Melinda Gates Foundation designed to improve community college graduation rates by making a five-year, $35 million investment in multi-campus community college systems in nine target states with large low-income populations (Arizona, California, Florida, Georgia, Ohio, New York, North Carolina, Texas, and Washington).

Financial strategies for using community colleges for cost savings and as a stepping-stone toward a four-year degree are well established. From a 2008 piece in the Community College Review entitled “Save $80K by First Attending Community College”:

Families are turning towards the financially savvy decision of starting on the higher education path first at a two-year community college. Many universities, both public and private, have articulation agreements with local community colleges. Therefore, attending a community college for two years before transferring to a four-year institution can save significant amounts of money.

This bricks-and-mortar strategy gains further traction in the hands of digital entrepreneurs. Schools for online learning have adapted this concept by building out national networks that connect associates programs — which benefit from flexibility, geographical range, and cost efficiency in a digital environment — to four-year completion tracks in students’ locations, with pre-negotiated acceptance for those who perform to acceptance criteria.

Read the rest of this post on The Scholarly Kitchen.

Leading Your Content to the Money — A New Equation for Selling Content to Consumers

In Innovation, Internet Business Models, Social Media on August 4, 2010 at 7:00 am

The notion that information wants to be free is absurd when the delivery mechanism is making a fortune and the creators are getting what amounts to zilch. – Peter Osnos, “Will Google Save the News?”

Monopoly by Mikael Miettinen on flickr

In order to focus their attention on big institutional content deals, publishers have traditionally relied on third-party service providers (agents and the like) to conduct business with individual end-users. However, with institutional budgets in decline, content providers are turning their attention to consumer markets as a potential source of business growth. Asserting themselves in the consumer space will require a new type of sales and marketing acumen and visibility into consumer behavior, which recognizes and responds to the many new ways that consumers are seeking to interact with vendors and each other in online environments.

The longstanding business equation in B2B publishing has been:

Quality Content + Brand Recognition + Operational Efficiency + Institutional Usage = Market Share/Financial Success

Publishers have negotiated big deals but have largely let consumers fend for themselves. This strategy will not fly in consumer markets, where visibility and demand are the primary drivers of revenue, and where methods for marketing to consumers have changed dramatically. The best approach for publishers wishing to enter the consumer marketplace is to take a step back, free themselves from preconceptions of what their business is about, and take a look at what is really working in the consumer Web. Only through entrepreneurial thinking will they have a shot at success in consumer content markets.

Consumers are constantly inundated with free content but are rapidly flocking to demand-based, interactive services and are making freemium purchases in that context. Content providers can meet this reality head-on by wrapping content in value-added service layers that address consumer needs and support collaboration …

Continue reading this post on The Scholarly Kitchen.

Serious Games, Science Communication, and One Utopian Vision

In Innovation, Technology on June 9, 2010 at 12:52 pm

"enercities" by centralasian on flickr


Read my complete post on The Scholarly Kitchen. Excerpt:  

Even for mainstream students, gaming is a ubiquitous, informal learning vehicle. According to a January piece in the New York Times, “If Your Kids Are Awake, They’re Probably Online,” people ages 8-18 spend an average of one hour and thirteen minutes per day gaming, compared with 38 minutes per day using print.

Dr. Michael Rich, a pediatrician at Children’s Hospital Boston who directs the Center on Media and Child Health, said that with media use so ubiquitous, it was time to stop arguing over whether it was good or bad and accept it as part of children’s environment, “like the air they breathe, the water they drink and the food they eat”.  

Over the course of the next 15 years, this community of users who experience content versus strictly reading it will comprise the community of scientists, researchers, and society members who are our customers. It may be difficult for traditionalists to make the conceptual leap from journal or book publishing to scientific simulations and instructional gaming. However, as economics and culture align, these will become part of the fabric of the industry.  

Not everyone will thrive in a transformed business landscape. For centuries, scientific publishers have been scribes and disseminators of content who have translated the activity of science into a linear, replicable, two-dimensional experience. Sometimes even the most accomplished companies can’t transition outside their core specialties. (Apple, for example, is an exemplary device manufacturer and marketing company that has been comparatively ineffective in the software space. Microsoft, conversely, has excelled in software but failed to make headway in devices.)  

Is it better, then, for publishers to focus on the curation and filtering of content, leaving user services development to others? Or should they be cultivating new skills that prepare them for a different future?  

Read more.

Web 2.0 Next: Companies Place Bets on Consumer Relationships and Collaboration

In Innovation, Social Media on June 2, 2010 at 1:45 pm

Read my complete post on The Scholarly Kitchen. Excerpt:   

A survey of the Web 2.0 Next landscape reveals two principal directions in the forthcoming evolutionary cycle — one towards value-added business service and the other supporting more fluid online collaboration by community groups and work teams.

This post provides a quick rundown of emerging businesses (in and adjacent to scholarly publishing), which are gearing up to generate better service and more collaborative utilities based on social platforms [… read more about Ellerdale, Twazzup, Glue, Copia, Scribd, Google Wave, and Ushahidi].

Source: Damien Basile on flickr


Given the shifting consumer and technology landscapes, traditional practices are costly and non-agile mechanisms for creating and sustaining this type of engagement. Although publishers may not be fluent in Web 2.0 yet, they may have more reason than ever to explore new applications, technological capacities, and vendor services.   

Content sharing and direct-to-user communication tools will increasingly replace outmoded services that fail to connect publishers/brands with their ultimate consumers of content.   

Web 2.0 services, if used effectively, have the capacity to make content the nucleus of an engaged discussion or work process and to foster two-way communication with key constituents. Publishers who can effectively incorporate Web 2.0 in their programs may be able to reduce their reliance on intermediaries–and will stand apart by harnessing the power of their consumer audiences.

The Digital Universe, Information Shadows, and Paying for Privacy

In Innovation, Privacy, Technology on May 17, 2010 at 2:08 pm

"The Shadow Knows" by GregStruction on Google Images


Read my complete post on The Scholarly Kitchen. Excerpt:  

Everywhere we turn, we encounter debates over the risks and legality of uses of “private” data by social media mega-businesses like Facebook and Twitter.  

Google is the latest culprit to be caught in the spotlight.  

The lead technology piece in Saturday’s New York Times zeroed in on Google’s violation of German privacy laws, in connection with the company’s admission that it had systematically harvested private data from households in Europe and the US since 2006 — including email content and websites visited — in the course of capturing drive-by images for Google’s Street View photo archive.  

There are already books to teach Internet privacy “survival skills” and software downloads to “erase” your data footprint. It won’t be surprising to find that some are willing to pay generously for services that sanitize their information shadows with virtual lye and steel wool. Privacy will be a scarce commodity, and its market value will rise. When privacy becomes monetized, we may assign relative values to our own private information according to the type of information that is protected or made available.

While papers have touched on the potentially inverse relationship that exists between user privacy and the efficacy of Web 2.0 social ranking and recommendation engines, social media engines are only the beginning of what is to come …  

Read more.

Can New XML Technologies and the Semantic Web Deliver on Their Promises?

In Innovation, Linked Data, Technology on May 10, 2010 at 3:04 pm

Source: Petr Kratochovil


Read my complete post on The Scholarly Kitchen. Excerpt:           

There is active debate on the Web about the potential for Web 3.0 technologies and the standards that will be adopted to support them. Writing for O’Reilly Community, Kurt Cagle has remarked:           

My central problem with RDF is that it is a brilliant technology that tried to solve too big a problem too early on by establishing itself as a way of building “dynamic” ontologies. Most ontologies are ultimately dynamic, changing and shifting as the requirements for their use change, but at the same time such ontologies change relatively slowly over time.           

As of January 2009, when Cagle wrote this, RDF had failed to garner widespread support from the Web community — but it has gained significant traction during the past year, including incorporation in the Drupal 7 Core.             

The promise within this alphabet soup of technologies is that semantic Web standards will support the development of utilities that:           

  • Provide access to large repositories of information that would otherwise be unwieldy to search quickly
  • Surface relationships within complex data sets that would otherwise be obscured
  • Are highly transferable
  • Deliver democratized access to research information
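As a toy illustration of the first two points — using hypothetical identifiers and plain Python rather than a real RDF stack such as rdflib or a SPARQL endpoint — content relationships can be modeled as subject–predicate–object triples and traversed to surface connections that a flat full-text search would miss:

```python
# Toy sketch: RDF-style triples as (subject, predicate, object) tuples,
# with a naive pattern match. All identifiers below are invented.
triples = {
    ("doi:10.1000/j1", "cites", "doi:10.1000/j2"),
    ("doi:10.1000/j2", "cites", "doi:10.1000/j3"),
    ("doi:10.1000/j1", "hasTopic", "semantic-web"),
    ("doi:10.1000/j3", "hasTopic", "semantic-web"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Surface an indirect relationship: papers two citation hops apart.
two_hops = [(s, o2)
            for s, _, o in match(p="cites")
            for _, _, o2 in match(s=o, p="cites")]
print(two_hops)  # → [('doi:10.1000/j1', 'doi:10.1000/j3')]
```

A real semantic Web deployment would express the same idea with RDF vocabularies and SPARQL graph patterns, but the principle is identical: because relationships are first-class data, queries can follow chains of links rather than matching isolated records.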

But there are risks. Building sites that depend on semantic technologies and RDF/XML can take longer and be more costly initially. In a stalled economy, long-term financial vision is harder to come by, but those with it may truly leapfrog. In addition, there are concerns about accuracy, authority, and security within these systems — concerns the architects must address in order for them to reach the mainstream.

… [O]ne may wonder whether this is an all-or-nothing proposition. Without speed and consistent delivery of reliable results, projects such as these may fail to meet user expectations and be dead in the water. On the flip side, if RDF/XML and its successors can accomplish what they purport to, they will drive significant advances in research by providing the capacity to dynamically derive rich meaning from relationships as well as content.

A Future of Touch and Gestures: New Interfaces Driving Scientific Information Presentation

In Innovation, Technology on May 5, 2010 at 8:49 pm

Read my complete post on The Scholarly Kitchen. Excerpts:     

The variety of app-delivered games and tools currently available offers a representative taste of current capabilities in graphics manipulation and uses for interactively received inputs. Multitouch technology, however, is still in its nascent stage, and its true potential has yet to be realized.

Displax and Archimedes Solutions are two companies seeking to seize the opportunities offered in this emerging area.   

Source: by liquene on Flickr


As these technologies continue to improve, they will significantly alter the ways we work with and experience information, including images and data. We will increasingly transition from environments governed by the restrictions of mice and keyboards to more interactive environments — in the vein of Wii, iPad, and iPhone — that support a more fluid, intuitive, and experiential exploration of scientific and non-scientific content and media.

While timelines are uncertain, expect that consumers of our information will include traditionalists/linear thinkers and visual/experiential thinkers, all of whom will increasingly require that we meet them “where they are” by providing a suite of mechanisms for interacting with content of various types.