Friday 25 May 2012

Attacking publishers will not make open access any more sustainable | Graham Taylor

 
 

Sent to you by Frouke via Google Reader:

 
 

via Education: Higher education | guardian.co.uk by Graham Taylor on 5/25/12

Publishers fully support expanding access to publicly funded research but only as part of a model that is financially viable

Much has been written about journal publishers over the past few months but unfortunately this has focused almost exclusively on one side of the debate: the desire for greater access to peer-reviewed research outputs, especially journal articles, which publishers are painted as somehow resisting and restricting.

To be clear from the outset, we fully support expanding public access to publicly funded research. One only has to look at what has happened over the past year, especially since the publication of the government's Innovation and Research Strategy for Growth and the convening of the Finch Group to recommend a strategy for extending UK researchers' access to global research. This is a process in which publishers have engaged fully and are making great strides to promote enduring, financially sustainable open access models.

We also made a significant announcement on 2 May – on the day that the universities and science minister, David Willetts, came to speak at the Publishers Association AGM – that publishers are exploring fee-waived walk-in access via the public library network. As the minister pointed out, this proposed PA initiative would be a very useful way to extend public access to research outputs currently only available through subscription.

These are not merely words: a working group of journal publishers and public librarians is taking this work forward on behalf of the PA. A preliminary technical report should be available by mid-June, with the objective of enabling access by the end of the year. This facility is already available through university libraries, although whether these libraries choose to allow walk-in access is a matter for them.

Much of the focus of this debate has been on the value of peer review and the role that scholars and researchers play in this process. By implication publishers are perceived as contributing very little, other than simply assembling articles into journals and pushing them onto cash-strapped libraries to make a gargantuan profit.

That is a gross distortion of reality. The publishing process involves: soliciting and managing submissions; managing peer review; editing and preparing scripts; producing the articles; publishing and disseminating journals; and of course archiving. And the end result acts as a calling card and mark of quality, helping readers find content that is relevant to them and is trusted. At a time when we are looking for an export-led recovery, UK-based scholarly publishers account for over £1bn in export sales.

Perhaps most important of all, from an access point of view, is the amount publishers have invested in platforms that support researchers in numerous ways. These include investments in article enhancement, visualisation, social networking, and mobile technology; valuable tools such as searchable image databases, navigation, alerts and citation notifications, and reference analysis. Publishers are also working on text-mining tools; linking to the datasets behind journal articles; and research performance measurement tools such as SciVal.

These are all part of the academic ecosystem and are provided by publishers, not to mention that almost 100% of journals are available electronically – created, digitised, structured, tagged and disseminated by publishers. But it seems to be much easier to belittle the role of publishers than to have a serious look at what is being provided.

The debate about the cost of journals is made difficult by the fact that there are wide variations across the industry, and of course competition issues debar any collaboration. However, in 2010 – the last year for which Society of College, National and University Libraries data are available – UK universities had access to 2.42m journal subscriptions, an increase of 93% over 2006. The price paid for these subscriptions, £134m, increased by only 31% over the same period, so the price paid per journal accessed actually fell by 32%.

In 2010, universities spent 0.54% of their total institutional expenditure on subscriptions to journals and 20% of their library budget, which in turn was 2.7% of total institutional expenditure. Journal collections or "big deals", though often criticised, have contributed significantly to this reduction in unit costs by enabling the most popular material to be sold at a lower price with an extra slice of research material on top. And of course libraries can choose either to subscribe to these broad collections (at substantial discounts) or to purchase individual titles.
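
The arithmetic behind these figures checks out. Here is a short Python sketch, using only the percentages and shares quoted in the two paragraphs above (illustrative round numbers, not SCONUL's underlying data):

# Back-of-the-envelope check of the SCONUL figures quoted above.
subs_growth = 1.93        # journal subscriptions accessed rose 93% from 2006 to 2010
price_growth = 1.31       # the total price paid rose 31% over the same period

unit_price_change = price_growth / subs_growth - 1
print(f"Change in price per journal accessed: {unit_price_change:.0%}")   # about -32%

library_share = 0.027            # library budget as a share of institutional expenditure
journal_share_of_library = 0.20  # share of the library budget spent on journals
print(f"Journals as a share of total expenditure: {library_share * journal_share_of_library:.2%}")  # 0.54%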

It is clear, however, that further efficiencies can be made, for example in the peer review process. This is why publishers run peer review innovation projects. So far there seems to be no alternative to the view that pre-publication review by selected experts should sustain the production and dissemination of high-quality science over the longer term. This may, of course, change over time and publishers will continue to encourage innovation in peer review practices.

Given the budgetary challenges that libraries face, the profit margins of some of the larger publishers are portrayed as a moral affront. Unfortunately, publishers seem to be part of a broader backlash against perceived corporate greed and abrogation of social responsibility. But publishers are entitled to make a profit, and need to. Profits derive from efficiency, profits fund investment and drive innovation, and profits are taxed – which provides the public money to fund research. Scholarly publishers support 10,000 jobs in the UK and are significant net revenue earners for the UK. The members of the Publishers Association pay more in taxes to the UK exchequer than all UK universities collectively pay to all publishers globally for access to their journals.

Clearly the costs of publishing services must be met somehow, and these are of course in addition to the costs of doing the research itself. If we lived in a world where all such services were paid for prior to publication, then all research content could be made freely available. But we do not, or do not yet, live in such a world. A similar point can be made about the transparency of contracts: there is no UK legislation that interferes in commercial contracting between two businesses. Companies are perfectly entitled to negotiate terms and conditions on a case-by-case basis and to negotiate those terms in confidence.

To reiterate, scholarly publishers are happy to work with any long-term financially viable business model for publishing services. We are happy to work with models where funding is provided on the author-side or the user-side of the publication process, or hybrids of the two. By contrast, mandated deposit in repositories is not a publishing model, has no associated revenue stream and, worse, threatens to erode the revenues deriving from the subscriptions on which the model depends.

Publishers have nevertheless said that we are happy to work with this "green" approach in combination with viable publishing models such as funded ("gold") open access or subscription, provided that the time gap (the "embargo period") between first publication and availability in a repository does not fatally undermine revenue streams. We are ready to work with funding bodies, government agencies, researchers, librarians and other stakeholders of all kinds to expand access in sustainable ways. But that's just it: they need to be viable in the long term. Attacking publishers will not make open access any more sustainable. We all need to work together to achieve this.

Graham Taylor is director of academic, educational and professional publishing at the Publishers Association


guardian.co.uk © 2012 Guardian News and Media Limited or its affiliated companies. All rights reserved.


 
 


 
 

Tuesday 22 May 2012

New Optical Illusions Expose More Foibles of the Brain

 
 

Sent to you by Frouke via Google Reader:

 
 


Dozens of newly discovered optical illusions competed for the title of "Best Illusion of 2012" last week at the annual meeting of the Vision Sciences Society in Florida. An illusion known as the "disappearing hand trick," which causes people to feel as though their hand has vanished, earned the top prize at the eighth annual contest.



 
 


 
 

Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

A neural network implementation of predictive coding (also published in Biological Cybernetics)

 
 

Sent to you by Sander via Google Reader:

 
 

via q-bio.NC updates on arXiv.org by Sebastian Bitzer and Stefan J. Kiebel on 1/22/12

Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
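
To make the predict-compare-correct loop concrete, here is a minimal Python sketch of a generative tanh RNN whose output is then decoded by an error-driven recognition pass. This is only an illustration of the general scheme under simplifying assumptions (linear observations, a fixed gradient-style correction with a hand-picked step size); it is not the Bayesian update equations derived in the paper, and all dimensions and parameters are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Generative RNN (assumed form): hidden dynamics x_{t+1} = tanh(W x_t),
# observations y_t = C x_t + noise.
n_hidden, n_obs, T = 8, 3, 200
W = 1.2 * rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
C = rng.standard_normal((n_obs, n_hidden)) / np.sqrt(n_hidden)

x = rng.standard_normal(n_hidden)
observations = []
for t in range(T):
    x = np.tanh(W @ x)
    observations.append(C @ x + 0.05 * rng.standard_normal(n_obs))

# "Recognizing" pass: at each step, predict from the generative model,
# compute the prediction error against the incoming observation, and
# nudge the hidden-state estimate along the error gradient.
step = 0.5                    # correction step size (illustrative)
x_hat = np.zeros(n_hidden)    # deliberately wrong initial condition
errors = []
for y in observations:
    x_pred = np.tanh(W @ x_hat)      # top-down prediction
    e = y - C @ x_pred               # bottom-up prediction error
    x_hat = x_pred + step * (C.T @ e)
    errors.append(float(e @ e))

print("Mean squared prediction error, first 10 steps:", np.mean(errors[:10]))
print("Mean squared prediction error, last 10 steps:", np.mean(errors[-10:]))

In the paper's terms, x_pred plays the role of the prediction and e the prediction error message exchanged between units; the actual rRNN obtains its update rule from Bayesian inversion of the generative model rather than from a hand-tuned step size.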


 
 


 
 

Friday 11 May 2012

Surprised at all the entropy: hippocampal, caudate and midbrain contribution...

I don't know whether it is reassuring or just very puzzling to see (perceptual) prediction errors turn up subcortically. Where's the distinction?

 
 

Sent to you by Sander via Google Reader:

 
 


PLoS One. 2012; 7(5): e36445
Schiffer AM, Ahlheim C, Wurm MF, Schubotz RI

Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We conducted region-of-interest analyses of the BOLD responses to these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information-theoretic accounts.
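
The "average information, or Shannon entropy" referred to here is the standard measure H = -sum_x p(x) log2 p(x) over the distribution of stimulus symbols. A minimal Python illustration, with made-up stimulus sequences rather than the study's actual material:

import numpy as np
from collections import Counter

def shannon_entropy(sequence):
    # Shannon entropy, in bits, of the empirical symbol distribution.
    counts = np.array(list(Counter(sequence).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

predictable = "ABAB" * 10            # two symbols, equally frequent
unpredictable = "ABCDDACBBDCA" * 3   # four symbols, equally frequent

print(shannon_entropy(predictable))    # 1.0 bit
print(shannon_entropy(unpredictable))  # 2.0 bits

Sequences drawn from more, and more evenly distributed, possible stimuli carry higher entropy, which is the quantity the hippocampal response was reported to track.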


 
 
 
 

Opposite Modulation of High- and Low-Level Visual Aftereffects by Perceptual Grouping

For predictive coding and diamond lovers.

 
 

Sent to you by Jonas via Google Reader:

 
 

via CURRENT BIOLOGY 12.5.9

Dongjun He, Daniel Kersten, Fang Fang. A fundamental task of visual perception is to group visual features—sometimes spatially separated and partially occluded—into coherent, unified representations of objects. Perceptual grouping can ....

 
 


 
 

Thursday 3 May 2012

Perceptual organization of shape, color, shade, and lighting in visual and p...

 
 

Sent to you by Frouke via Google Reader:

 
 

via i-Perception by Pion on 5/3/12

The main questions we asked in this work are the following: Where are representations of shape, color, depth, and lighting mostly located? Does their formation take time to develop? How do they contribute to determining and defining a visual object, and how do they differ? How do visual artists use them to create objects and scenes? Is the way artists use them related to the way we perceive them? To answer these questions, we studied the microgenetic development of object perception and formation. Our hypothesis is that the main object properties are extracted in sequential order, and in the same order in which artists and children of different ages use them to paint objects. The results supported the microgenesis of object formation according to the following sequence: contours, color, shading, and lighting.

 
 
