When all else fails, Wikipedia.... http://en.wikipedia.org/wiki/Business_intelligence_tools#Open_source_free_products
RapidMiner and Pentaho Community Editions both look appealing. I hope to try them out soon.
I also found the Ruby ActiveWarehouse and ActiveWarehouse-ETL projects which look pretty cool for Rails projects, but maybe a bit stale.
Jason
Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
[log in to unmask]
913-588-7319
>>> On 9/13/2011 at 04:37 PM, in message <4E6FCD17.CF3 : 5 : 23711>, Jason Stirnaman wrote:
Thanks, Shirley! I remember seeing that before but I'll look more closely now.
I know what I'm describing is also known, typically, as a data warehouse. I guess I'm trying to steer around the usual solutions in that space. We do have an Oracle-driven data warehouse on campus, but the project is in heavy transition right now and we still had to do a fair amount of work ourselves just to get a few data sources into it.
>>> On 9/13/2011 at 04:25 PM, in message <[log in to unmask]>, Shirley Lincicum <[log in to unmask]> wrote:
Jason,
Check out: http://www.needlebase.com/
It was not developed specifically for libraries, but it supports data aggregation, analysis, and web scraping, and it does not require programming skills to use.
Shirley
Shirley Lincicum
Librarian, Western Oregon University
[log in to unmask]
On Tue, Sep 13, 2011 at 2:08 PM, Jason Stirnaman <[log in to unmask]> wrote:
> Does anyone have suggestions or recommendations for platforms that can aggregate usage data from multiple sources, combine it with financial data, and then provide some analysis, graphing, data views, etc?
> From what I can tell, something like Ex Libris' Alma would require all "fulfillment" transactions to occur within the system.
> I'm looking instead for something like Splunk that would accept log data, circulation data, usage reports, costs, and Sherpa/Romeo authority data, then schematize it all for analysis and maybe push out reporting dashboards (nods to Brown Library: http://library.brown.edu/dashboard/widgets/all/ ).
> I'd also want to automate the data retrieval, so that might consist of scraping, web services, and FTP, but that could easily be handled separately.
> I'm aware there are many challenges, such as comparing usage stats across sources, handling shifts in journal aggregators, etc.
> Does anyone have any cool homegrown examples or ideas they've cooked up for this? Pie in the sky?
>
>
> Jason
>
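[Ed.: the "schematize it for data analysis" step in the question above can be sketched very roughly as below: join per-journal usage counts (e.g. from a COUNTER-style report) with subscription costs and derive cost-per-use. The journal names, column names, and figures are all made up for illustration, not taken from any real report.]

```python
import csv
import io

# Illustrative stand-ins for two harvested data sources: a usage report
# and a financial export, both as CSV text. Column names are assumptions.
usage_csv = """journal,downloads
Journal of Example Studies,120
Acta Hypothetica,30
"""

cost_csv = """journal,annual_cost
Journal of Example Studies,2400.00
Acta Hypothetica,900.00
"""

def load_by_journal(csv_text):
    """Index CSV rows by the 'journal' column."""
    return {row["journal"]: row for row in csv.DictReader(io.StringIO(csv_text))}

def cost_per_use(usage_text, cost_text):
    """Join usage with costs and compute cost-per-download per journal."""
    usage = load_by_journal(usage_text)
    costs = load_by_journal(cost_text)
    report = {}
    for journal, row in usage.items():
        downloads = int(row["downloads"])
        cost = float(costs[journal]["annual_cost"])
        report[journal] = round(cost / downloads, 2) if downloads else None
    return report

print(cost_per_use(usage_csv, cost_csv))
# → {'Journal of Example Studies': 20.0, 'Acta Hypothetica': 30.0}
```

A real pipeline would of course have to normalize journal identifiers (ISSNs rather than titles) and reconcile differing vendor report formats before any join like this is possible.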