VisLR II: Visualization as Added Value in the Development, Use and Evaluation of Language Resources
Monday, 23 May 2016 (morning session, 9:00-12:40)
First Call for Papers
This workshop aims to provide a follow-up forum to the successful first VisLR workshop at LREC 2014, which addressed visualization designers and users from both computational and linguistic domains. Since the last workshop, interest in visualizing language data has further increased, as the recurrence of specialized symposia in linguistic and NLP contexts shows (cf. e.g. ACL workshop 2014, AVML 2014, Herrenhäuser Symposium 2014, QueryVis 2015). Moreover, visualization techniques are being applied to an ever wider range of use cases. However, the majority of linguistic visualization applications still mainly allow for the investigation of one feature at a time, e.g. word co-occurrence, topic similarity and the like. Given that language data is highly complex, it would be more desirable to have visualization systems that combine multiple dimensions and represent the dependencies between them. This opens possibilities for a more informed analysis of language data, as shown for instance in Gold et al. 2015, who use visual analysis to determine whether and how participants argue in a negotiation.
As a specialized subfield of information visualization, the visualization of language continues to face particular challenges: language data is complex, only partly structured and, as with today's language resources, comes in large quantities. Moreover, due to the variety of data types, from textual data to spoken or signed language data, the challenges for visualization are necessarily varied. The overall challenge lies in breaking down this multidimensionality into intuitive visual features that enable an at-a-glance overview of the data. The second edition of the workshop therefore aims to advance the field of linguistic visualization by focusing in particular on more advanced visualization techniques that represent the complexity of language and contribute to resolving these challenges.
We invite submissions on research demonstrating the development, use and evaluation of visualization techniques, with a particular focus on representing the multidimensional characteristics of language in order to arrive at ever more sophisticated visual language tools. This includes work applying existing visualization techniques to language resources as well as research on new visualization techniques specifically targeted to the needs of language resources. Particular consideration is given to papers aiming at the interoperability of the described visualization techniques.
Topics include, but are not limited to:
Papers must describe original, unpublished work (completed or in progress). We invite long papers (8 pages, including references).
Submission is handled via the LREC 2016 START page (https://www.softconf.com/lrec2016/VisLRII).
We use a blind reviewing process, in which the author names are not revealed to the committee of reviewers. Authors are requested to prepare their manuscripts in a manner that disguises their identities, affiliations, etc. This means (1) omitting names and affiliations from the title page; (2) refraining from excessive self-citation in the bibliography; and (3) omitting explicit references to the authors' previous work in the text body.
Please indicate possible conflicts of interest on the submission page. A conflict of interest exists when an author (or the author's institution), reviewer, or editor has financial or personal relationships that inappropriately influence (bias) his or her actions (such relationships are also known as dual commitments, competing interests, or competing loyalties).
Style guidelines for the camera-ready papers are provided on the main conference page: http://lrec2016.lrec-conf.org/en/submission/authors-kit/ We recommend using these templates for first submissions as well.
Contact: lrec.vislr AT gmail.com