OSS: A SWOT Analysis

Apr - 23 - 2010
Nicole C. Engard

Our closing keynote was Eric Lease Morgan – I’ve read a ton of his articles, but have never seen him in person.

Eric Lease Morgan

Eric started with a bit of his own history. He has been ‘kinda sorta’ writing code since 1976. While driving a taxi after college, he discovered his first ‘itch.’ He wanted to know how much he was earning, so he wrote a computer program that gave him all kinds of crazy stats about how much he was making – like how much per mile. He also liked astronomy and had an application for his calculator that let you find out where the moon was. He thought, this should be on a computer – and so he wrote a computer program that did the same thing. Eventually he even wrote an online catalog – he made it so that he could hand out a disk and people would know everything in his library.

Which brings us to open source. Open source is about community – if there is no community, there is no software and there is no support! (Some people seem to forget this very important fact.) Speaking of support, that is the biggest challenge to open source software.

Next up: the OSS SWOT analysis.

Strengths:

  • It benefits from the numbers game – chances are there is somebody out there with your particular interests. The internet makes that happen.
  • There are plenty of choices – many people are trying to scratch an itch.

Weaknesses:

  • Support is its biggest weakness.
  • OSS requires specialized skills – not necessarily programmers – but usually a systems administrator type of person to configure the application.
  • Institutions change slowly – change takes time and it often makes people nervous.

Opportunities:

  • Very low barrier to entry – computer hardware is cheap and the software is ‘free’.
  • Only limited by one’s time, imagination and ability to think systematically. OSS is like a hunk of unshaped clay.

Threats:

  • Established institutions – the status quo is threatened by OSS, which breeds FUD (fear, uncertainty, and doubt).
  • Past experience – the profession’s leadership liken OSS to the ‘homegrown’ systems of yesterday. Perceptions are slow to change.

Eric continued with his ideas on next-generation library catalogs. Library catalogs are, and have always been, essentially inventory lists, but given the current environment, the problem to be solved is not find and access but use and understand. Are we about ‘here’s the book’? Is that what we’re about? What can we do to take that one step further? We’ve used the computer to automate our processes; now let’s use the computer to supplement who we are.

Let’s assume that content is available in digital form – this is increasingly becoming true. So once you have a book, what do you do with it? Browse the TOC, check the index, put it under a wobbly chair, write in the book, analyze the content, read it…. You can read the book and get all of this, but we can provide a supplementary way of reading. So, assuming our content is digital, we can take it and count the number of times a word appears in a book and then compare that to the number of times that word appears in other books. The book in which the word appears more often can then be assumed to be more relevant than the other. (We’re not getting into numbers and statistics – which Eric likes – and which confuse the heck out of me.)
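Here’s a minimal Python sketch of that counting idea. The sample ‘books’ and the search term are made up for illustration; the point is just that a raw count, normalized by length, lets you rank one book over another:

```python
from collections import Counter
import re

def relative_frequency(text: str, term: str) -> float:
    """Occurrences of `term` per 1,000 words, so long and short texts compare fairly."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return Counter(words)[term.lower()] / len(words) * 1000

# Toy stand-ins for the full digital texts of two books.
books = {
    "Book A": "the pond lay still beside the cabin near the quiet pond",
    "Book B": "the city streets were loud and crowded at night",
}

# The book that uses the term more often is assumed to be more relevant.
for title in sorted(books, key=lambda t: relative_frequency(books[t], "pond"), reverse=True):
    print(title, round(relative_frequency(books[title], "pond"), 1))
```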

Eric comes to the conclusion that the availability of digital full text provides a host of opportunities for libraries that go beyond find and move towards use – services against text. The root of these services grows from the ability to count the words in any set of documents.

So the next-gen catalog is not just a finding mechanism but a way to understand the source material. We could add an ‘analyze’ button to the OPAC and have it analyze the text, saving the reader’s time by showing them whether the book is actually relevant to their needs. You could add to that the ability to see how a word is used in the text: click on the word, and then you see that word in context. This is possible using Eric’s Concordances tool.
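This isn’t Eric’s actual Concordances tool – just a little keyword-in-context sketch in Python to give the flavor of ‘click on the word, see it in context’:

```python
import re

def concordance(text: str, term: str, width: int = 30) -> list[str]:
    """Show each occurrence of `term` with `width` characters of context on either side."""
    hits = []
    for match in re.finditer(re.escape(term), text, re.IGNORECASE):
        start = max(match.start() - width, 0)
        snippet = text[start:match.end() + width].replace("\n", " ")
        hits.append("..." + snippet + "...")
    return hits

sample = ("I went to the woods because I wished to live deliberately, "
          "to front only the essential facts of life.")
for line in concordance(sample, "woods"):
    print(line)
```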

Next, we break the book down into things like the number of words instead of the number of pages (which is what we catalog now), because the number of pages is ambiguous – you have no idea if it’s a long book because you don’t know how many images there are or how big the font is. You can also see things like the grade range and the Flesch score (these, of course, are to be taken with a grain of salt because they don’t always give accurate information for an individual person). You can see the example that Eric showed us in this record for Walden by Henry David Thoreau.
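For the curious, the Flesch Reading Ease score is just a formula over average sentence length and average syllables per word: 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words), where higher scores mean easier reading. A rough Python sketch – the syllable count here is a crude vowel-group estimate, so take the output with the same grain of salt:

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Syllables are estimated by counting vowel groups, which is only roughly right."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[a-zA-Z']+", text)
    syllables = sum(max(len(re.findall(r"[aeiouy]+", w.lower())), 1) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("I went to the woods because I wished to live deliberately."), 1))
```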

Let’s imagine that this kind of metadata was in our catalog. You could search for short books for an 8th grader that have a very high ‘great ideas’ coefficient (another example that Eric gave us).
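To make that concrete, here’s a hypothetical sketch of such a search. The record fields (word_count, grade_level, great_ideas) and their values are invented for illustration – the ‘great ideas’ coefficient was Eric’s example measure, not an existing catalog field:

```python
# Invented sample records carrying the kind of computed metadata described above.
records = [
    {"title": "Walden", "word_count": 114000, "grade_level": 11, "great_ideas": 0.9},
    {"title": "Frog and Toad", "word_count": 2000, "grade_level": 2, "great_ideas": 0.1},
    {"title": "Meditations", "word_count": 45000, "grade_level": 8, "great_ideas": 0.8},
]

# "Short books for an 8th grader with a very high great-ideas coefficient."
hits = [r for r in records
        if r["word_count"] < 50000 and r["grade_level"] <= 8 and r["great_ideas"] > 0.7]
print([r["title"] for r in hits])
```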

It is important to mention that Eric is not saying this is the answer, but that it is supplemental to what we’re already doing with our controlled vocabularies and traditional cataloging.

In the end, we have the power to do this more than Google does because we know our audience. We know our patrons.


2 Responses so far.

  1. Alas, I’ve been writing software since 1976 not 1996. Thanks for the nods!


    ELM

  2. You know, I thought that – then I thought you said you were playing StarCraft, but maybe you said another game :) Will fix.

