I would also be interested to hear the results of this, Stuart.

Not that it adds very much more to what everyone has already provided, but
I remember bookmarking this page from the Annotations at Harvard project:
http://www.annotations.harvard.edu/icb/icb.do?keyword=k80243&pageid=icb.page466612

Seemed like a good primer, though it might not reflect the most current
work.

There was also this which I bookmarked around the same time:
https://code.google.com/archive/p/annotation-ontology/

Again, not too sure if these are all abandoned projects, but it at least gives
a sense of where things were at some point in the not-too-distant past.

On Fri, Mar 18, 2016 at 4:04 AM, Erwin Verbruggen <
[log in to unmask]> wrote:

> Hi Stuart, all,
>
> Very interested in the IIIF-developments as well. A colleague from the
> University of Amsterdam recently did a post on Digital Film Historiography
> <http://filmhistoryinthemaking.com/2016/03/16/update-digital-film-historiography-a-bibliography/>
> and when I asked about the tools in reference to this conversation replied:
>
> > Anvil was used by Adelheid Heftberger in the Digital Formalism project in
> > Vienna with really good results. In addition, the French tool Lignes de
> > temps developed by IRI at the Pompidou center has been used by several film
> > scholars and in education on several levels for video annotation (it also
> > exists in English), and I think it might be relevant/useful for the purposes
> > described, though it is not web-based from what I can see:
> >
> > http://www.iri.centrepompidou.fr/outils/lignes-de-temps/
>
> Stuart, hope all this brings you somewhat further to your original goal -
> would be curious to hear the results of your quest.
>
> Kind regards,
> Erwin
>
> On Thu, Mar 17, 2016 at 5:31 AM, Tom Cramer <[log in to unmask]> wrote:
>
> > Stuart,
> >
> > It may be useful to also cross-post this question to the IIIF-discuss
> > list [1]. There is a lot of interest in developing a IIIF-like approach to
> > presenting video via a common API, and one that lends itself to web-based
> > annotation. This would theoretically allow users to annotate videos with
> > their tool of choice, and to reuse / export the annotations to any other
> > tool.
> >
> > I expect this will be a topic at the next IIIF meetings, in New York City
> > (May 10-13, 2016). [2]
> >
> > - Thomas
> >
> >
> > [1] [log in to unmask]
> > [2] http://iiif.io/event/2016/newyork/
> >
> > On Mar 16, 2016, at 8:33 PM, Greg Lindahl <[log in to unmask]> wrote:
> >
> > This may or may not be relevant to the "annotation" that the original
> > poster had in mind, but the Internet Archive embedded video player
> > takes subtitles in the common SubRip .srt format, which is apparently
> > supported by many video players & subtitling programs.
> >
> > Instead of using this for closed captioning, you could use it for
> > annotations. Each video can have multiple .srt files, with the user
> > being able to pick which one is shown. I'm not 100% sure whether our embed
> > code allows the embedder to choose one .srt to be shown by default;
> > that's where my knowledge ends.
> >
> > https://archive.org/help/video.php
> > https://en.wikipedia.org/wiki/SubRip
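> >
> > As a small, made-up illustration of what such an "annotation track" could
> > look like (the timings and wording below are invented, not from any real
> > file), SubRip is just numbered entries of a time range plus text:
> >
> > 1
> > 00:00:05,000 --> 00:00:12,000
> > Establishing shot; note the hand-held camera work.
> >
> > 2
> > 00:01:30,500 --> 00:01:45,000
> > Interview segment begins; compare with the 1962 footage.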
> >
> > -- greg
> >
> > On Wed, Mar 16, 2016 at 02:06:46PM +0100, Gregory Markus wrote:
> > Hi Stuart,
> >
> > A colleague of mine has just recently recommended Clipper (
> > http://blog.clippertube.com/index.php/clipper-prototype-3/); they're
> > currently experimenting with it in the EUscreenXL project.
> >
> > Might be worth checking out for you as well.
> >
> > Curious as to what others will suggest as well.
> >
> > Cheers,
> >
> > greg
> >
> > On Tue, Mar 15, 2016 at 11:11 PM, Andrew Gordon <[log in to unmask]>
> > wrote:
> >
> > Thanks for sending out that document, Erwin.
> >
> > This is a really interesting topic and I feel like video annotation on the
> > web should be more of a thing.
> >
> > On top of what Erwin already provided (OVA in particular looks like a
> > project that might be a good fit for your needs), there are also:
> >
> > http://mith.us/OACVideoAnnotator/ - which is a proof of concept using the
> > open annotation specification (http://www.openannotation.org/). The
> > specification is format agnostic, intended for annotating objects with
> > text, media, web resources, etc. - the genius.com folks seem to be
> > involved.
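> >
> > To give a rough sketch of the model (the URLs and wording below are
> > invented for illustration, and the property names are simplified - check
> > the spec for the exact JSON-LD context), an annotation attaching a textual
> > note to a ten-second segment of a video might look something like:
> >
> > {
> >   "@context": "http://www.w3.org/ns/oa-context-20130208.json",
> >   "@id": "http://example.org/annotations/1",
> >   "@type": "oa:Annotation",
> >   "hasBody": {
> >     "@type": ["cnt:ContentAsText", "dctypes:Text"],
> >     "chars": "Cut to close-up of the letter here."
> >   },
> >   "hasTarget": "http://example.org/videos/demo.mp4#t=10,20"
> > }
> >
> > The #t=10,20 bit is a W3C Media Fragments time selector, which is one way
> > to point an annotation at a span of video rather than at the whole file.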
> >
> > http://cowlog.org/ - pretty basic, but appears to get the job done and is
> > web-based.
> >
> > There are scads of proprietary and open-source desktop video
> > coding/annotation tools that I will spare you the burden of going
> > through. Full disclosure: I work on a project whose sibling project is a
> > desktop video coding tool for psychology researchers.
> >
> > From my vantage point, video annotation software generally seems to be
> > developed around a specific set of user needs (a particular type of
> > researcher and research subject, for example). The more specific the
> > target audience, the more robust the set of tools built around those needs.
> >
> > The biggest issues come down to the diversity of video encodings and the
> > ability of operating systems to play them back. That said, the web has even
> > more limitations around which video formats it will support, but if you
> > control the source of the video, this might not be such a big deal.
> >
> > It would really be great to see video annotation specifically for DH
> > projects warm up.
> >
> > Have a look at all the resources and determine whether you think it might
> > be useful just to roll your own annotator using HTML5, some sophisticated
> > JS libraries for handling media, and hopefully wrapping it in a standard
> > like the Open Annotation Data Model (linked above).
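> >
> > If you do go that route, the core mechanics are small. As a minimal sketch
> > (the element IDs and in-memory store are invented for illustration; a real
> > tool would persist annotations server-side and serialize them to the Open
> > Annotation model), capturing a note at the current playhead is roughly:
> >
> > <video id="player" src="your-video.mp4" controls></video>
> > <textarea id="note"></textarea>
> > <button id="save">Annotate at current time</button>
> >
> > <script>
> >   var player = document.getElementById('player');
> >   var annotations = []; // kept in memory here; POST to a server in practice
> >
> >   document.getElementById('save').addEventListener('click', function () {
> >     annotations.push({
> >       start: player.currentTime,                  // seconds into the video
> >       text: document.getElementById('note').value // the annotation body
> >     });
> >     // each entry could later be expressed as an Open Annotation target
> >     // using a Media Fragment such as "#t=" plus the stored start time
> >   });
> > </script>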
> >
> > Would love to hear what others think/may have experienced.
> >
> > Drew
> >
> >
> >
> >
> >
> > On Tue, Mar 15, 2016 at 5:04 PM, Erwin Verbruggen <
> > [log in to unmask]> wrote:
> >
> > Dear Stuart,
> >
> > A few years ago we started an overview of video annotation projects and
> > tools for the EUscreen network. We haven't been able to turn it into a
> > state-of-the-art document as of yet, but I'm hoping it would be useful for
> > such an endeavour:
> >
> > https://docs.google.com/document/d/1t6CIL8oQjkAtUe2LGInrUgxpNzj5k9s17Mihz6UotIM/edit?usp=sharing
> >
> > Kind regards,
> > Erwin
> >
> > Erwin Verbruggen
> > Project lead R&D
> >
> > Netherlands Institute for Sound and Vision
> > Media Parkboulevard 1, 1217 WE  Hilversum | Postbus 1060, 1200 BB
> > Hilversum | beeldengeluid.nl
> >
> >
> > On Tue, Mar 15, 2016 at 9:38 PM, Stuart Snydman <[log in to unmask]>
> > wrote:
> >
> > I am doing some discovery for a DH project that, at its center, needs to
> > annotate digital video (locally produced videos that will be hosted and
> > streamed on the web in our local environment). We are still gathering
> > requirements, but it needs to:
> >
> >
> >  *   have a user-friendly interface for creating annotations, preferably
> >      on the web but not an absolute requirement
> >  *   create annotations at specific timestamps, or across spans of time,
> >      and have those annotations associated with regions of the video image.
> >  *   annotations could include text, audio, video, image, URL, etc.
> >
> > We’d prefer open source solutions that can be integrated into a web app,
> > but aren’t fully closed to alternatives. We’d strongly prefer a solution
> > that supports open standards for annotation or is at least capable of
> > supporting open standards.
> >
> > I know there are many, many video annotation projects. What is the
> > current state of the art in creating and viewing web-based video
> > annotations?
> >
> > Many thanks,
> >
> > Stu
> >
> >
> >
> >
> >
> >
> >
> > --
> >
> > *Gregory Markus*
> >
> > Project Assistant
> >
> > *Netherlands Institute for Sound and Vision*
> > *Media Parkboulevard 1, 1217 WE  Hilversum | Postbus 1060, 1200 BB
> > Hilversum | *
> > *beeldengeluid.nl* <http://www.beeldengeluid.nl/>
> > *T* 0612350556
> >
> > *Aanwezig:* - ma, di, wo, do, vr
> >
> >
>