Not really, that I can see. Since I maintain the API, I control what goes into
the responses, and I only return what is necessary for display and interaction.

For example, our Service & Location Hours
(http://www.library.jhu.edu/hours.html) are all managed in separate Google
Calendar calendars. The GCal API response for a week's worth of opening and
closing times is around 9k (per calendar). With 10 services/locations, that
would be close to 100k of JSON alone being sent every time the hours page
loads. So the GCal API response is processed on the server side, and I only
send about 600 bytes to the user (per calendar). My API also sits behind
Varnish, so the responses are cached and served up super quick. I <3 Varnish.
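
If it helps to see the shape of it, here's a rough sketch of that kind of
trimming endpoint in Grape. To be clear, this isn't our actual code; the
route, the field names, and the GCAL_KEY environment variable are just
placeholders for illustration:

require 'grape'
require 'net/http'
require 'json'

class HoursAPI < Grape::API
  format :json

  # GET /hours/:calendar_id -> a few hundred bytes of open/close pairs,
  # instead of relaying the full Google Calendar response to the browser.
  params do
    requires :calendar_id, type: String
  end
  get '/hours/:calendar_id' do
    # Pull the (much larger) upstream response from the Google Calendar API.
    uri = URI("https://www.googleapis.com/calendar/v3/calendars/" +
              URI.encode_www_form_component(params[:calendar_id]) + "/events")
    uri.query = URI.encode_www_form(key: ENV['GCAL_KEY'], singleEvents: true)
    raw = JSON.parse(Net::HTTP.get(uri))

    # Keep only what the hours page actually renders.
    raw.fetch('items', []).map do |event|
      { open:  (event['start'] || {})['dateTime'],
        close: (event['end']   || {})['dateTime'] }
    end
  end
end

Rack serves that up, Varnish caches it, and the JavaScript on the page only
ever sees the small responses.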

We're not currently in the central CMS; it's just a locally hosted Apache
site. But we have a branch campus (https://www.sais-jhu.edu/library) that
is hosted within their centralized (and rather locked-down) Drupal install
and makes use of some of the same API components (the list of LibGuides and
databases).

And really, your data backend could be anything. It could be a "dark"
WordPress install from which you grab Atom feeds for content. We use a
mixture of Google Calendar, Google Docs, LibGuides, WordPress, Twitter, and
locally-generated XML.
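
For the dark-WordPress case, the server-side piece can stay pretty small,
too. Here's a rough sketch of pulling entries out of an Atom feed; the URL
is made up (WordPress normally exposes an Atom feed at /feed/atom/, but
check your install):

require 'net/http'
require 'rexml/document'

# Hypothetical feed URL for a "dark" WordPress install.
xml = Net::HTTP.get(URI('https://example.org/dark-wp/feed/atom/'))
doc = REXML::Document.new(xml)
ns  = { 'a' => 'http://www.w3.org/2005/Atom' }

# Walk the Atom entries and keep just the bits the page needs.
REXML::XPath.each(doc, '//a:entry', ns) do |entry|
  title = REXML::XPath.first(entry, 'a:title', ns)
  body  = REXML::XPath.first(entry, 'a:content', ns)
  puts "#{title && title.text}: #{body && body.text.to_s[0, 80]}"
end

Same pattern as the hours stuff: process it once on the server and hand the
page only what it needs.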

-Sean


On 8/14/13 10:37 AM, "Josh Welker" <[log in to unmask]> wrote:

> That's an interesting idea. Do you run into performance issues with the
> abundance of DOM updates from the JavaScript? Also, how much control do you
> have over the content of library pages in the CMS?
> 
> Josh Welker
> 
> On Aug 14, 2013, at 8:35 AM, Sean Hannan <[log in to unmask]> wrote:
> 
>> You could do something like what I did and run your own data backend and use
>> whatever you need to (or have to) to display content.
>> 
>> Our website is just static HTML, CSS, and JavaScript. Everything
>> dynamic/data-powered is JavaScript pulling from a centralized API
>> (written using Grape: http://intridea.github.io/grape/). We can move the
>> website to some cloud provider, into a central IT-managed system, or
>> elsewhere and it won't break.
>> 
>> I originally presented the concept at code4lib 2011 (slides:
>> http://www.slideshare.net/MrDys/lets-get-small-a-microservices-approach-to-library-websites),
>> but it's in production now.
>> 
>> -Sean
>> 
>> On 8/14/13 9:21 AM, "Joshua Welker" <[log in to unmask]> wrote:
>> 
>>> Does anyone have any suggestions as to where the library should or should
>>> not compromise when it comes to using an institutional CMS rather than a
>>> custom library one? We are going through this process right now. Our web
>>> pages are currently all in static HTML and LibGuides. I want to move
>>> to Drupal, and campus IT wants us to move to their Adobe Contribute
>>> platform. AFAIK, Contribute does not allow for any server-side scripting
>>> and does not have any sort of plugin system, and I am very concerned that
>>> Contribute would harm the library's ability to effectively integrate its
>>> online resources into a single web portal (server-side caching, indexes,
>>> scheduled tasks, etc).
>>> 
>>> I know the answer to this question is "it depends," but I am hoping others
>>> can share the fruits of their experience.
>>> 
>>> Thoughts?
>>> 
>>> Josh Welker
>>> Information Technology Librarian
>>> James C. Kirkpatrick Library
>>> University of Central Missouri
>>> Warrensburg, MO 64093
>>> JCKL 2260
>>> 660.543.8022
>>> 
>>> 
>>> -----Original Message-----
>>> From: Code for Libraries [mailto:[log in to unmask]] On Behalf Of
>>> Jimmy Ghaphery
>>> Sent: Tuesday, August 13, 2013 5:49 PM
>>> To: [log in to unmask]
>>> Subject: Re: [CODE4LIB] LibGuides: I don't get it
>>> 
>>> I have followed this thread with great interest. In 2011 Erin White and I
>>> researched many of the issues the group has been hitting on, demonstrating
>>> the popularity of LibGuides in ARL libraries, the locus of control outside
>>> of systems' departments, and the state of content policies.[1]
>>> 
>>> The article's most challenging statement to the library tech community
>>> (which was watered down a bit in the peer review process) was
>>> "The popularity of LibGuides, at its heart a specialized content
>>> management system, also calls into question the vitality and/or
>>> adaptability of local content management system implementations in
>>> libraries."
>>> 
>>> One of the biggest challenges I see toward creating a non-commercial
>>> alternative is that the library code community is so dispersed across the
>>> various institutions that it is difficult to get away from the
>>> download-tar.gz model. Are our institutions ready to collaborate among
>>> themselves such that there could be a shared SaaS model (of anything,
>>> really) that libraries could subscribe/contribute to? The barriers here
>>> certainly aren't technological, but more along the lines of policy,
>>> governance, etc.
>>> 
>>> As for Research Guides in general, I see a very clear divide between the
>>> public and tech communities, not only on platform but also philosophically.
>>> From the tech side, once it is all boiled down: heck, why do you even need
>>> a third-party system? Catalog the databases with some type of local genres
>>> and push out API/XML feeds to the various disciplines. From the public
>>> side, there is a long lineage of individually curated guides that goes to
>>> the core value of professionally knowing one's community and serving it.
>>> 
>>> [1] https://ejournals.bc.edu/ojs/index.php/ital/article/view/1830
>>> 
>>> best,
>>> 
>>> Jimmy
>>> 
>>> 
>>> 
>>> On Tue, Aug 13, 2013 at 11:13 AM, Galen Charlton <[log in to unmask]>
>>> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> On Tue, Aug 13, 2013 at 6:53 AM, Wilhelmina Randtke <[log in to unmask]>
>>>> wrote:
>>>> 
>>>>> There's not a lock-in issue with LibGuides, because it's used to
>>>>> host pathfinders. Those are supposed to be periodically revisited.
>>>>> One of the big problems is that librarians will start a guide and
>>>>> never finish, or make one then never maintain it. Periodically
>>>>> deleting everything is a good thing for pathfinders and subject
>>>>> guides, and people should do it anyway. No one's talking about
>>>>> tools for digital archives, which have lock-in issues and are way
>>>>> more expensive.
>>>>> 
>>>> 
>>>> Lock-in doesn't have to be absolute to be effective; it just has to
>>>> raise the bar sufficiently high to make users think twice about
>>>> migrating away.
>>>> 
>>>> This applies even if the data to be moved is transitory and constantly
>>>> changing. For example, if a library has been diligently updating its
>>>> pathfinders but wants to switch platforms, and there were no way to
>>>> export them for loading into the successor system, the effort of redoing
>>>> them or doing a lot of copy-and-pasting could be prohibitive.
>>>> 
>>>> As a general statement -- and I know that this battle has been
>>>> bitterly fought in the ILS space -- I believe that *all* library
>>>> software services, whether based on F/LOSS software or proprietary
>>>> software, should provide a way for the library to obtain a full dump
>>>> of their data, in an accessible format, at no additional charge.
>>>> 
>>>> I see that LibGuides advertises the ability to make local backups of
>>>> individual pages and also provides (via a paid add-on module) an XML
>>>> export function. I don't know if Springshare will also provide free
>>>> one-time exports on request, but I would hope they do.
>>>> 
>>>> Of course, even if one has the data in hand, data migrations can still
>>>> take a lot of time, effort, and expertise.
>>>> 
>>>> Regards,
>>>> 
>>>> Galen
>>>> --
>>>> Galen Charlton
>>>> Manager of Implementation
>>>> Equinox Software, Inc. / The Open Source Experts
>>>> email:  [log in to unmask]
>>>> direct: +1 770-709-5581
>>>> cell:   +1 404-984-4366
>>>> skype:  gmcharlt
>>>> web:    http://www.esilibrary.com/
>>>> Supporting Koha and Evergreen: http://koha-community.org &
>>>> http://evergreen-ils.org
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Jimmy Ghaphery
>>> Head, Digital Technologies
>>> VCU Libraries
>>> 804-827-3551