NFAIS 2009: Digital Natives and Professional Searching: Improving the User Experience

The third panel for today was titled Digital Natives and Professional Searching: Improving the User Experience.

Chris Lamb from Thomson Reuters took the podium first with his talk entitled Desperately Seeking Paris, in which he talked about Calais, a free web service and open API (there are already plugins for Drupal, OpenOffice, WordPress and others) that makes obvious to computers what is obvious to humans: the difference, when searching for ‘Paris’, between ‘Paris, France’; ‘Paris, Texas’; and ‘Paris Hilton’. Basically, it generates metadata that distinguishes between such results.

It takes unstructured documents (text, HTML, XML), extracts named entities, facts, events, and categories, and makes connections between the entities in your content and related data in DBpedia, GeoNames, the CIA World Factbook, and more. In short, it’s a realization of the semantic web.
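To make the Paris example concrete, here is a small Python sketch of what working with typed-entity output looks like. The JSON shape below is invented for illustration – it is not the actual Calais response schema – but it shows the idea: once entities carry a type (City vs. Person) and a resolution, the ambiguity between the different ‘Paris’ results disappears.

```python
import json

# Illustrative entity-extraction output; the real Calais response
# schema differs, this just demonstrates typed, resolved entities.
sample_response = json.loads("""
{
  "entities": [
    {"type": "City", "name": "Paris", "resolution": "Paris, France"},
    {"type": "Person", "name": "Paris Hilton"}
  ]
}
""")

def entities_of_type(response, entity_type):
    """Return names (resolved where available) of entities of one type."""
    return [
        e.get("resolution", e["name"])
        for e in response["entities"]
        if e["type"] == entity_type
    ]

print(entities_of_type(sample_response, "City"))    # ['Paris, France']
print(entities_of_type(sample_response, "Person"))  # ['Paris Hilton']
```

With output like this, an indexing engine can treat the city and the celebrity as different index entries rather than one ambiguous keyword.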

Some sample extraction applications for Calais include:

  • Indexing and Abstracting (sorting and collating)
  • Investigative Reporting (tag background documents and reveal hidden connections)
  • Media Monitoring (competitive intelligence and blogosphere monitoring)
  • Online publishing

How are people using Calais in search? Calais is a platform, not an application – so it’s not a search engine. People are using Calais to supplement indexing engines like FAST. Once Calais returns the data, the content is semantically enhanced, allowing the indexing engine to support semantically enabled results and links. There are people working on projects like this – but they have yet to be released or even announced, so there is no way to see one in action online yet.

That said, there are over 9,000 developers using it already and there are over 1 million daily submissions – so if you want to play with Calais you won’t be a guinea pig.

This talk made me curious enough to check out the plugins for apps that I already use to see what kind of value they add.

Rudy Potenzone from Microsoft came next with his talk, And the Barbarians have Phasers: Authors and Their Tools Come of Age.

This is the most information-aware group (the digital natives) that has ever come into our offices, our libraries, and our universities – and they come equipped, already knowing how to use Twitter and blogs, and they expect these tools when they come into the workforce. How do we deal with this and prepare for them?

Microsoft is envisioning a new era of research reporting. The author of today is the reader of tomorrow – so how do we capture enough information to make the content interesting to the reader? Office 2007 and SharePoint are the ways they’re opening up to these new ideals – SharePoint is Microsoft’s most popular product, with the highest sales of anything they sell. This is a sign of the times – how people want to work in their offices.

Rudy talked about Microsoft’s efforts to bring their tools up to the expectations of today’s academic environment. There are a lot of projects going on to try to bring these ideals to life. One example is The British Library’s Research Information Centre (RIC) and another is an eJournal Publishing Service. All of these examples are built on Microsoft products and can be found online here. You can also find code online at codeplex.com.

One tool that sounds neat to me was the author add-in tool that lets you get the rules for the publication you’re writing for and add your own metadata as you’re writing the article in Microsoft Word. My only problem is that while these add-on tools are open source and free to download, you still need the Microsoft software to use them – which I do – but you get my point :)

Kristian Hammond finished up with his presentation Frictionless Information: Adding Value in the Age of Google.

Coming from the IT world, he’s trying to understand our world – the world of publishers and content providers. He thinks we create really high-value, fabulous content – and that people used to be willing to pay for that – but they’re not anymore :) He listed our pressing problems as:

  • Google
  • Social Media
  • Content, content, content
  • Bounce (somebody finds something on Google that leads them to your site; they bounce on and then bounce off – never to return)
  • Free

His department decided that the way around these problems is focusing only on the user: giving the user what they want, when they want it, without taking them away from what they’re doing. He doesn’t care what the source is (blogs, services, web, news, opinion, video, etc.). Nothing is going to stop people from using web resources – so let’s embrace it: bring together the high-class content, mix it with the other content, and provide it all to the user.

His solution to this is the Relevance Engine. While he’s writing, it’s reading; while he’s reading, it’s reading. It builds a gist – what it thinks the document is about – and gives additional information about it from anywhere on the web, all sources. Because it reads the context of the document you’re working on, it finds better results than if you typed in a query. This can be done both on the desktop and on the web – from a piece of indexed text we can find anything! The better the indexing, the better the results.
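The core idea – deriving a query from the document you’re reading instead of a typed search box – can be sketched very roughly. To be clear, this is not Hammond’s actual Relevance Engine; it’s just a minimal term-frequency illustration of building a “gist” from context (a real system would use a proper stopword list and smarter weighting such as tf-idf):

```python
import re
from collections import Counter

# A few very common words to ignore; purely illustrative.
STOPWORDS = {"the", "a", "an", "is", "it", "of", "and", "to", "in", "that", "for"}

def gist(document, top_n=3):
    """Return the top_n most frequent non-stopword terms as a rough gist."""
    words = re.findall(r"[a-z]+", document.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

doc = ("The semantic web links entities in content to structured data. "
       "Semantic metadata makes content easier for machines to search, "
       "and semantic search improves the results users see.")
print(gist(doc))  # ['semantic', 'content', 'search']
```

Those gist terms could then be fed to any search backend, which is what makes the approach work both on the desktop and on the web.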

It all comes down to loving your user, protecting your user from the horror of the text box!! Providing your user with your amazing content so that they never have to go looking for it elsewhere.

Wow, what an animated speaker – I wish he was one of my professors when I was in school :) I bet that class is so much fun!!


NFAIS 2009: Engaging the Global Digital Native: Transforming Technologies

The second panel of the day was titled Engaging the Global Digital Native: Transforming Technologies.

Daniel Albohn of Sony was up first and titled his talk Trends and Developments in e-Reading and Digital Content : The advent of electronic paper displays.

He started by talking to us about the technologies behind e-book readers (the Sony Reader and the Amazon Kindle). Their first device was launched in 2004 in Japan, but US consumers were genuinely interested in the product. He showed us the history of these devices and where they’re going. On a sad note (personally), color probably isn’t coming to e-ink until 2010 – and even then it will be pastels, not deep colors like we’re used to in our textbooks.

The current user base is public transit users and the techies/gadget lovers. It is now becoming a tool for publishing professionals because of the ‘green effect.’ That’s what I would think would be the biggest draw – I would love to use a digital reader instead of printing out PDFs. I was always told that I couldn’t read PDFs on the Sony Reader – but Daniel told me that PDFs are readable on the Reader. The problem is that when PDFs get heavy on charts and graphics they don’t render that well, but text-heavy PDFs work great.

I guess I have to go back to restart my research and see if I want the Kindle or the Reader for my journal reading :) (it may come down to which has software that works on a Mac).

Salim Roukos with IBM Research Worldwide was next with his talk entitled Real-Time Translation Services.

In 2006 English was the top language used online, but this past summer, Chinese surpassed that, making Chinese the most used language online. Content online is going to become more and more diverse, making it harder to research if you don’t understand multiple languages.

Salim talked to us about three types of translations:

  • Text-to-Text
  • Speech-to-Text
  • Speech-To-Speech

For Text-To-Text, he showed us n.Fluent Translation, which has 90% translation accuracy. He showed us the translation from Arabic to English on the BBC’s website and it was pretty darn good. People can then make corrections and ‘teach’ the machine how to translate properly. He also showed us a multilingual chat they have at IBM: each user chats in their own language, but messages appear to the other person in that person’s language. I tried to find a live demo of this but wasn’t able to (and had limited Internet access) – so if you know of a way to test this awesome tool, just let me know and I’ll put a link in.

For Speech-To-Text he showed us Tales, a service that takes news from several channels worldwide and adds closed captioning immediately – and it’s even better than the closed captioning I’ve seen on US news shows.

He then showed us an example of Speech-To-Speech, a device they are using in Iraq to let soldiers talk to the locals. The example he gave us was from the device used during the Olympics in China – I have no idea how accurate it was (because I don’t speak Chinese) but it was pretty darn cool to watch it in action.

Zsolt Silberer from Wolters Kluwer ended this panel with his talk Transforming Technologies – A Publisher’s Perspective.

Zsolt started by reminding us that we’re in the midst of unprecedented change, and he feels (unlike others I’ve heard in the audience here) that it’s a great time to be a publisher because the power of the content curator is valuable online.

He talked about his 4-year-old and how she’s used to instant gratification, seeing the picture on the digital camera right after it’s taken. She also wants to be able to do multiple things at once. While that seems intimidating, Zsolt says that it’s just evolution and we shouldn’t be scared of it.

One way to go with the flow (my words) is to start to understand your customers better. Students aren’t reading textbooks much anymore, they’re learning off of PowerPoint slides put up by their professors and teachers, they’re learning with games and they’re learning by taking notes in class. What can we do to appeal to these students with the new ways they’re learning?

Another example is the prevalence of mobile devices. Doctors have their mobile devices with them at the patient’s bedside, so it would be best if they could research on those devices. Right now they can search several sites using mobile devices, but in the end it leads them to a PDF and then they’re stuck … Publishers have to make their content available wherever customers and users want it – and they have to open up their systems to enable experimentation and remixing by developers.

In short, we’re getting there, but we’re not there yet, and it’s time to really focus on getting our content out there onto these devices.


NFAIS 2009: Information Services for the Born Digital Generation

First this morning was the Information Services for the Born Digital Generation Panel. Up first was Daviess Menefee from Elsevier with his talk titled 2Collab – The Research Collaboration Tool.

Why did they develop it?

Basically, to provide information about and an understanding of the new types of user-generated content. Customer expectations are starting to change, and they wanted to understand these changes and what their impact might be on scholarly communication. They also wanted to build and consolidate virtual scientific communities – and hopefully shorten research cycles by increasing the productivity of research through facilitating the exchange of knowledge.

Advantages to scientists:

  • discover new research material
  • share and identify quality information
  • avoid ‘blind alleys’ through communication with peers
  • collaborate without email
  • mine collective wisdom of experts
  • stay current with what others are saying in your field
  • hold discussions either in private groups or openly with the wider scientific community

They have integrated 2Collab into ScienceDirect and Scopus so that you can click an Add button while researching to send an article to 2Collab – it works like my Delicious Firefox plugin. There is also a browser toolbar you can download and install to grab data from other resources that are not Elsevier-owned.

To address the issue of privacy, 2Collab offers 3 types of privacy. There are the open public groups (visible to anyone on the web), a closed public group (a group for only people who are approved by an admin), and lastly a completely closed group (everyone has to be invited in order to see the content).

They did a survey in May 2008 to see how scientists are using social tools. The major age groups who responded (it was a self-selecting survey) were the under-25s through 45s, and they were mostly researchers and research associates. Of those who answered, 65% said they used social networking for their work and 35% for leisure. The main reasons they used social networking were to collaborate on research and keep up with what’s new in the field. In 5 years, over 50% of respondents see social applications playing a key role in shaping nearly all aspects of the research workflow.

So these results showed them that they were right in creating 2Collab. 2Collab positions Elsevier to take advantage of the rise of social networking among young researchers in the scientific fields. My favorite closing point was that this was a learning experience for those at Elsevier!! Awesome – keep learning!!

John Law was up next with his talk titled Accessibility of Scholarly Resources.

John talked to us about his observational research of student researchers – he called it going native – watching students in their own environment. In addition, they did online chat sessions, conventional focus groups with librarians and surveys.

John decided to give us information on Aaron (not a persona – a real person), a 3rd-year undergraduate student. He has a term paper in which he has to evaluate how the alternative press covered George Bush during the Katrina disaster. Aaron starts his search in the OPAC and can’t find anything with his simple search of ‘media and government.’ He then tries browsing by subject, but can’t find the right subject for his research. His searching leads him out of the library site – which, by the way, had a great database for his research, but he didn’t know that the library had it. Once he’s out of the library website he spends the rest of his research out on the open web. In the end he had almost 0% success with this searching technique.

So, they asked the students what the superior source for quality, credible content was – the library was by far the most common answer (versus web search). Then why do researchers so often find themselves out on the open web? John’s answer is that it’s compensatory behavior. The library site leads them out to the web, and that leads them to a single search box that is easy to use, so they start searching there. In most cases, students don’t even realize that they’re engaging in compensatory behavior.

So, what are the ways to find resources in libraries?

  • library catalog – but this usually has the physical collection and not the electronic, of which there is usually more
  • eResources page – lists all of the electronic content, but there are hundreds of resources listed and so there is no way they’re going to read through the entire list to find what they need
  • Federated Search – which isn’t up to snuff yet

So in the end people end up at Google because it’s simple, easy and fast – they don’t have to know anything about the content they’re searching. Unlike Internet searching, in the library you have to know what resources are indexed where before searching.

John then reviewed Google as a research tool. Even if you do a refined search on Google, you still get hundreds of thousands of results which you have to sort through.

“Too many results from a Google search and the need to sort through them” and “Figuring out what is a credible source, and what is not” – Project Information Literacy report, What Today’s College Students Say…, Feb 2009, www.projectinfolit.org

What have we learned from this? Don’t fear Google, embrace it, but also don’t rely on it as the primary means for your library clients to access your subscription content. What we need is simplified access to library content. Librarians realize this and view it as a challenge to the library.

What is Serials Solutions doing about this? They have developed Summon, a Unified Discovery Service – a Google-like search for your library. It enables quick discovery of the most credible resources wherever the library has them. Unlike federated search, the results come back immediately, and users don’t just get titles – they get access to the metadata, so they can winnow the results down to exactly what they’re looking for.

In the service so far they have 70+ providers supplying content, 50,000+ journals and over 300,000,000 records.

So, what are publishers to do? John says we should listen to Inger and Gardner (Sept 2008) who say that “A key measure of publisher success is the usage which can be maximized by enabling all the routes to its content … library technology plays a key role in user navigation.”

Ann Thorton was up last with her talk entitled Equipping and Empowering Staff to “Get Out There”.

Ann asks, how do we ensure that our staff are ready to play on this new playing field? At the NYPL, visitors, circulation, and website views are all up. Even so, there is still a downward trend in the use of two of the library’s assets: unique collections and staff expertise.

How do we best leverage what we consider to be our greatest asset (these unique collections and our staff expertise)? We need to create a new digital experience, we need to set content free, and we need to get staff out there to engage with patrons in the spaces they’re occupying. They’re starting to use Drupal at the NYPL to help with some of this. They are also using some third party sites to put their content up because their patrons are already there.

So, how are they out there? They’re on Facebook, YouTube, iTunes University, Flickr Commons and Twitter (just launched 2 weeks ago). Ann showed us the library’s YouTube channel, where they are the 66th most-viewed nonprofit this month and have a great collection of videos that feature the library’s collections and staff. They have also hired new employees to help set content free and train the staff.

In addition to these tools, they have established a blogging policy for their staff and these new blog posts are popping up in Google search results – leading people to the library. A popular misconception is that this blogging at the library is being done by digital natives – but in fact it’s all of the staff from all generations – it’s just a way for the librarians to share their research passions with the patrons and the world.

She showed us their Flickr Commons page and it was amazing. They had uploaded content and got a great response including comments, tags, new metadata and even a mashup of their pictures mapped on a Google Map – showing a now and then view of these locations.

A good closing comment – having the technology is not enough – they need the staff and the knowledge to use it.


NFAIS 2009: Digital Natives and Traditional Information Resources

The last talk for day one of NFAIS Annual was titled Digital Natives and Traditional Information Resources. We had the pleasure of hearing from 3 digital natives about their expectations of and experiences with digital media.

First up was Carrie Newman, her talk was titled Perspectives of a Digital Native Librarian. She started by giving us proof that she’s a digital native – she bought John’s book (Born Digital) on Amazon, read about 10 pages and then sold it on Amazon. She then listed the tools she uses online:

  • Professional tools: Google, Wikipedia, WorldCat, Amazon, PubMed, Engineering Village, university library catalog
    • (research and collection development)
  • Personal research: Google Scholar (first place I turn – because the library databases are not so good – the indexing is poor and the coverage isn’t that great – and they’re slow – so if I’m going to use something that isn’t that great it might as well be fast – Google), then some social science databases (out of guilt), citation mining, professional journals and talking to colleagues in person
  • Tools for collaboration: Google Docs, Delicious, Wikis, Skype, IM, Staff blog

Traditional Tools Versus New Tools

  • Traditional = slow/clunky, old, hard to use – but since she’s a librarian she knows how to use them and so does use them
    • best used for defined and complex research questions
  • New tools = chaotic, poorly organized – but they’re fast, so you’re willing to sit and sift through results

Given that, Carrie (and her patrons) uses new tools to narrow search results down and find keywords – then goes into traditional tools to get valuable resources.

Carrie gave us a definition of an ideal professional information resource:

  • excellent indexing that promotes browsing (most tools that have good indexing don’t have good browsing)
    • it turned out that she meant metadata management more than indexing
  • Many refine options – the ability to shrink your search down after you search a broad topic
  • Fast and easy to use – she said that “if it’s not fast, I’ll get bored and go use something else”
  • Smart – like Amazon – where it will auto-recommend things to me (nice to have the ability to tag – but not necessarily see others’ tags)
    • an audience member brought up an interesting point about this after the fact. While I agree that it would be nice to see things like this, she worried that this would lead to all students having the same resources in their papers instead of letting them do the research and come up with their own choices and opinions. I don’t know what the right answer is here …
  • Integrates seamlessly with bibliographic managers (Refworks, Endnote, Papers)
  • Programmable and automatable (email you new results – or RSS feed)
  • Broad coverage all in one place

Next up was Sabrina Manville from Ithaka with her talk titled From Campus to Cubicle.

Sabrina worked on a study to see how students were using JSTOR (the assumptions were the same as others we’ve heard: “they use the internet for everything, want instant gratification, value social networking and other virtual communication”).

Here’s what they heard from students:

  • want to find sources that their professors will accept “won’t laugh at”
  • ease of use, convenience
  • plagiarism is a big concern – citing sources reassures professors that they did the actual research

How did students do their research?

  • search engines are key (they didn’t know about Google Scholar, so they were just using Google)
  • when searching students move from broad to narrow
  • readability and speed are important (Google is mindless – whereas it takes a lot of effort to get on JSTOR)
  • they did hear a lot about quality and what it is

The good thing they found was that students knew what to look for in quality information:

  • .edu and .gov domains
  • what it cites
  • writing style and grammatical correctness
  • the aesthetic element was of great importance – old or outdated websites are looked down upon, and ads are very unwelcome, as are other distractions when doing academic work (it’s possible they didn’t know that Google text ads and the text ads on Facebook are ads – or maybe they have blockers in place like I do in my browser)

Issues that undergraduates said they had with doing research on JSTOR

  • searching is a challenge – students said search results were hard to penetrate; they are eager for tools which will help them narrow the results further

When asked if they would want web 2.0 functionality in their resources, the answer was yes – but not 100%.

  • they liked the concept of MyJSTOR
  • didn’t want to find other people doing the same research
  • Facebook is for my friends and I don’t want that in my research
  • links are highly appreciated – between resources
  • suggested JSTOR develop
    • tagging
    • ranking based on usage
    • user reviews and articles
    • article suggestions (like from Netflix and Amazon)

Sabrina then gave us some insights into her professional experiences (the cubicle part of her title).

She wanted to start with some disclaimers. While she is a digital native, when she was in college (pre-2006):

  • Google was only search – no email or docs
  • Facebook was only for college students
  • most friends didn’t read blogs
  • no iPhone or mobile web
  • no Twitter

When researching she looks for more current resources, so having tools that let her search current information is important. She doesn’t care whether an article is peer reviewed or not, as long as it provides valuable information (I’m with her on this). And like Carrie, she starts with Google and then moves on to the more specific resources.

The fact is that commercial sites have influenced our expectations – Google, Amazon, Netflix, etc.

So, what can we do to improve the user experience?

  • many of these traits can be implemented fairly easily in traditional academic resources
  • provide better context for content
  • continue to increase scale and comprehensiveness, take advantage of user data
  • improving usability is a huge leap forward in itself!

Last up was Jason Hoyt a student at Stanford University with his talk titled The future of scholarly search, communication, sharing, databases.

He started by giving us an animated story about his research experiences. I say animated because he used some really fun slides. He found that he was being inefficient in his research across search, communication, sharing, and databases. He would talk to people, go online to look for information, go back to talking to people – it was wasting too much time. And so, Jason’s call is for collective intelligence – he wants these 4 things (search, communication, sharing and databases) to talk to each other and then talk to us and give us the information we’re looking for.

One example of a site that does this well is kayak.com – in a single search, the best price, travel time, departure, or arrival can be prioritized across multiple databases (a mashup of multiple databases). This is collective intelligence – and using Kayak’s API you can pull out even more information to remix.

He showed us some graphs that displayed how hard it is to keep up with our areas of research. In the ’50s a researcher could do their own research and keep up with the learning curve, but today we can’t do that – there is way too much information to keep up with it all. Jason listed a series of applications and put them into either the ‘traditional tools’ category or the ‘new tools’ category (I couldn’t keep up with all of them – so hopefully he puts his slides online for us all to see).

He did an informal study and asked people to rate the importance of a series of tools in their day-to-day work:

  • Google, PubMed, open access were the top three – most important (equal for all ages)
  • the under 30 crowd thought social networks were more important than the older researchers

So, why do people use social networks?

  • networking outside of the lab – find jobs, new ideas, and form collaborations
  • for the most part people thought it wasn’t mainstream enough and that conferences were a better way to make new connections

When asked “what do you think would benefit the world community of researchers more, open access or improved meta search?” 70% said open access.

Jason (and I agree) thinks that the key to success is to build something that integrates traditionally individual tools with a crowd (collective intelligence). The traditional players need to work towards new business models that can sustain open access (like PLoS). In the meantime we need to provide better APIs and XML formats for machine-readable searches (OTMI – the Open Text Mining Interface). And lastly, continue hosting these kinds of conferences so that we can all talk about what we do to improve our search experiences.

After these three were done with their talks, we were able to ask the panel questions – to me it seems like throwing the lambs to the wolves. I’ve been in that seat before and I have to say that the traditional publishers and vendors are very scared of words like ‘open access.’ One audience member asked how they could sustain their businesses if they gave their content away for free – I don’t have the answer – but I know it’s possible because others are doing it. Another person said that she wasn’t hearing anything new from these speakers – except that they were introducing the 3 Fs – Free, Facebook, Fast – she wanted to know what we (the digital natives) were doing differently from her generation in terms of research. The panel wasn’t sure how to answer her, so I did. I told her that she’s right, we’re not doing anything differently – mainly because the tools aren’t there for us to do anything differently – they’re the same tools she learned on and used. The point of the talk was to show people what we want in our research tools – not what we have right now.

Overall, a great first day – and now it’s time to head out to day two!


NFAIS 2009: Google Generation

Ian Rowlands presented a talk titled The Information Behavior of Researchers of the Future: Survey Results. Ian started by saying that he was happy that John was optimistic before him – because it lets him be a bit more anxious and less complacent about where we’re going on our own digital journey. He talked about a study he did with JISC and the British Library exploring what happens in 2017, when the first of the digital native generation start to hit the big research libraries like the British Library.

That said, it was very hard to do this study because in 2017 technologies will be different and we’ll be different. One question that arose from this was: is there a real difference between the Google Generation and earlier generations at the same point in their development? When we talk about young people we have to think back to when we were young – the values we had, the ways we used libraries, and the skills we had.

This is one of the reasons it is very dangerous to stereotype an entire generation. He pointed out a study by Synovate in 2007 that found only 27% of young people live up to images of total IT immersion – for most (57%), ‘technology was not a badge to be worn, but something that had value’ once its functional usefulness had been demonstrated – and 20% are ‘digital dissidents.’ Another study, by Ofcom (in the UK) that same year, found that over-65s spend 4 hours a week longer online than 18-24s. Another stereotype is that “old people” use manuals and “young people” just play until it works, but an Ofcom study in 2006 showed that it’s actually the opposite: young people will read the manual, and older people will get annoyed with it and just try to use the device without reading.

All that said, he’s not saying that the Google Generation conception isn’t important – he’s saying that we’re all partially Google Generationalized. We’re making a big assumption based on when someone is born. Why do libraries and information professionals spend so little time doing what other industries do – studies of actual use? Other industries spend lots of money on user research – so why don’t we? One of his slides had a great quote: “stereotype means to cast a person in a preset mold – to deny them individuality.”

Research over 15 years shows

  • young people have difficulties formulating appropriate terms due to the use of natural language (how to build bird’s nests)
  • assume search engines understand sentences and questions
  • do not use advanced search facilities or navigation aids
  • have trouble generating alternative search terms / synonyms
  • often repeat the same search several times

Other issues

  • the speed of young people’s web searching shows that little time is spent evaluating information
  • information seeking stops at the point where articles are simply found rather than perused
  • little regard is paid to the text itself – only to the presence/absence of words exactly matching the search terms or a word in the title
  • an appropriate accompanying image is also enough to confirm relevance

These problems have always been around – and many of them are not unique to young people! Once again, he reminded us, ‘We are all the Google Generation!’ We’re only 15 years into the digital library proper – and we’ve had over 5 millennia to work out how to handle hard copy and printed materials … in short, we’re still learning!

I do agree that we need to do some actual studies instead of just assuming that we know what our users want and that all “young people” think and research the same way.

Very interesting talk and some great points were made!


NFAIS 2009: Keynote: Born Digital

John Palfrey, co-author of Born Digital: Understanding the First Generation of Digital Natives, was our keynote speaker.

First, what a great talk! I know that this book is on a lot of our wishlists or bookshelves, but nothing gets me more excited about a book than hearing the author speak. John started by telling us that the book studies and unpacks the myths about digital natives.

He started out by giving us some of the details of the study:

  • a common misperception is that all young people use technology the same way – and that adults don’t use it at all
    • there is of course a group of people who do use these tools the same way – but that doesn’t mean that everyone does
  • he/the study defines digital natives as: born after 1980; having access to technologies (so this does not include young people in poorer countries); and having the skills and know-how to use these technologies in enhanced ways

He then explained the Digital Landscape for us by asking what it means to have constant connectivity – a phone, PDA, laptop – something always on us. He talked about how digital natives have merged their real identity with their digital identity – there is just one identity. There’s a sense that what you put online is just as important as what you choose to wear in the morning – they live in a converged environment. According to the guidelines for this study I’m not a Digital Native, but I do see my digital identity this way. I work in a virtual office, so I have only my digital identity to go on in my professional life – until someone invites me to speak to their group in person.

Next, multi-tasking – “it’s not a distraction, it’s interaction”. John spoke of his students in a class at Harvard Law School – everyone is looking into their laptop, and it’s hard as a professor to see everyone paying attention to a laptop instead of the professor. Discomfort aside, according to some studies this is not actually good for education. That said, some multi-tasking is really task-switching – rapidly alternating between two tasks – which is actually good for productivity and education.

He also found that young people have a presumption that everything they deal with will come in digital format. He told the story of his 3-year-old asking where you see the picture on a film camera – something he and his wife bought on a trip because they forgot their digital camera. To further this point, YouTube is the number 2 search engine in the world – behind Google.

John continued that that’s just part of the story – in addition to presuming it’s digital, they presume that it’s social. They’re not blogging for themselves, and they’re not taking pictures for themselves – they’re uploading them to Flickr or Facebook to share with everyone. He gave a great example of this from his organization: they asked on the web for a logo and got over 170 entries without offering a prize – people just wanted to participate.

In addition to being users of digital technologies, digital natives are very often the entrepreneurs and creators of these technologies – Facebook for example.  Because of this, there is a very rapid feedback loop between the developers, the consumers and the business people (see the most recent Facebook policy change suggestion – and then withdrawal).

The last issue he addressed regarding the landscape was that people are very good at working collaboratively – across geographic and virtual boundaries (working with Google Office, etc.) – and we under-leverage this as teachers.

Addressing Perceived Threats

  • Security
    • stranger danger, bullying, hacking (bullying is probably the biggest threat)
  • Privacy
    • young people share too much about themselves online – this is not a myth
    • they have a sense of security that is probably false
  • Intellectual property
    • young people don’t pay for their music – not a myth he can bust – it is the case for 9 out of 10 people they asked
    • they know that it is illegal, but feel that it’s still okay to do – they’re sticking it to the man
    • rights to remix/reuse – not surprisingly they had no idea what the laws were on this issue
    • copyright 200 years ago was the domain of publishers – now it’s relevant to everybody
    • He showed us The Ballad of Zach McCune
  • Credibility of information
    • misinformation, cheating, hidden influences
    • the threat of Wikipedia
    • when asked where they went for information – it was always “I would go to Google, look at the top 10 results and find the Wikipedia article and go to it”
    • there were some kids who thought that it was crap – that their classmates were there before them and changed the page to screw them up
    • kids don’t read the newspaper – or watch the news – we know that they get information from lots of different sources – and are they able to sort through them and filter properly?

John ended with a positive outlook for each of the points above:

  • in the context of safety and security – kids are learning new media literacy skills
  • intellectual property issues – these actually lead to creativity in kids – they’re creating new things
  • kids are participating in a global knowledge-creating endeavor – a lot of cross-cultural learning
  • kids have access to more information than ever before

It’s hard to see how we get from here – this time of great disruption – to where we have this amazing resource, but he thinks that young people can help us. Check out DigitalNative.org to learn more or watch more videos related to the book on YouTube.


NFAIS 2009: Conference Prep

The title of this year’s NFAIS Annual Conference is Barbarians at the Gate? The Impact of Digital Natives and Emerging Technologies on the Future of Information Services.

So, when I arrived at the conference early, I headed across the street for lunch at Borders (it was raining and it was closer than the library) and picked up a copy of Grown Up Digital to skim before the conference started. I read the introduction and a bit of the chapters on revamping our education system to meet the needs of digital natives – and I found it very interesting! The author mentions that our current educational systems were built around the industrial worker – who was expected to listen to his superiors and just do his job. In that model – the one we all experienced as students – the teachers lecture and the students write down what they’re told to regurgitate on the test.

Last week I saw a piece on the news about a school in our area that is having students (young students) attend morning meetings in their classrooms where they talk about themselves a bit, then do a group project of some sort, and generally learn to get along, work together and learn together.

This is the model that seems to be right for the digital natives – a classroom where the students collaborate with each other and learn to work in groups – and have their own opinions. It’s a great model and I hope that more and more schools are following it – because today’s youth are used to having a voice and being allowed to collaborate with those around them – and around the world for that matter.

Well, the conference is about to begin – so it’s time to see if any of the speakers touch on the points found in Grown Up Digital.


The Future of Bibliographic Control: A Time of Transition

This is an interesting sounding event hosted by NFAIS at PALINET headquarters:

The Internet, search engine technology, and the growth in electronic resources have significantly changed both the publishing and the library environments. And a new, born-digital generation of information seekers is accelerating the pace of change as they embrace technology and integrate it into all aspects of their lives. This evolution from a print to a digital information environment is forcing all those involved in bibliographic control for information access and retrieval to rethink traditional practices and procedures – even to rethink the concept of journals and issues! How does digital article-by-article publishing impact library acquisitions and cataloging as well as processing by traditional abstracting and indexing services? How can user-generated content be leveraged to enrich bibliographic services? Can librarians and content providers collaborate in the creation and sharing of bibliographic data? What new forms of bibliographic control are emerging? And what opportunities does the future hold for the traditional players in bibliographic control?

The event takes place on the 28th of March and if you register on or before March 8th, you get a discount.


NFAIS – Early Bird Registration Ending Soon

Join NFAIS for its 50th Anniversary (1958 – 2008)

The cut-off date for the early bird conference registration fee is only days away – Tuesday, January 8, 2008!! The conference, scheduled for February 24-26, 2008 in Philadelphia, PA, is for all information providers – publishers, librarians and educators – who want to learn more about the user behavior and expectations that are driving the new information order and the technologies, business practices and strategies that are required to adapt products and services to a new generation of information seekers.

The Conference theme – The New Information Order: Its Culture, Content and Economy – will look at how the rapid adoption of information technology is creating a user-centric, technology-driven society with its own unique culture, value propositions, behavior and economy, and will highlight the opportunities that are available to all who are willing to adapt to the New Order. The preliminary program, registration forms and general information are now available at: http://www.nfais.org/2008_Tier_Program.htm.

Highlights include:

  • Trends that are driving the new information order from noted author David Weinberger (Everything is Miscellaneous: The Power of the New Digital Disorder, The Cluetrain Manifesto, etc.)
  • Emerging technologies and the future of information discovery
  • User perceptions of the value of content based upon recent surveys from Outsell, Inc.
  • Corporate and library business practices and revenue models that reflect the culture of today’s information society
  • The geographic shift in the information economy and the opportunities offered by China as a new source of content
  • Strategies for success in the New Information Order from the perspective of corporate, academic and government executives

This 2008 NFAIS Annual Conference will be a very special event as NFAIS will be marking the 50th Anniversary of its founding. The City of Philadelphia will proclaim the opening day, February 24, 2008, as “NFAIS Day,” the Gala celebration will be held in the ballroom of the historic Academy of Music, the oldest grand opera house in the U.S. that is still used for its original purpose, and the meeting itself will be held in the Park Hyatt at the Bellevue, a national historic hotel. Join us and find out how your organization can thrive in the New Information Order!

For more information, contact Jill O’Neill, NFAIS Director of Communication and Planning (jilloneill@nfais.org or 215-893-1561) or visit the NFAIS Web site at http://www.nfais.org/events/event_details.cfm?id=44.
