Research Science and Libre Computing
A Scientist's Perspective
I am a Ph.D. student at U.C. Davis studying hearing loss, auditory selective attention, and auditory neuroprostheses. My work is a collaboration between the Miller Lab, located in the Center for Mind and Brain and led by Dr. Lee Miller, and RASCAL (Robotics, Autonomous Systems, and Controls Laboratory), located on the U.C. Davis main campus and led by Dr. Sanjay Joshi.
By day I spend my time working, researching, and conducting experiments. On the side, I like to spend my time in and around open source software and technology, tinkering with computers and electronics, and I consider myself an exceptionally mediocre woodworker.
Research science has specific requirements and use-cases that are seldom directly addressed in libre or non-libre software. Traditionally, computers are used in four primary ways in research science: 1) developing and preparing media and materials for a research project, 2) conducting a research study, i.e. stimulus delivery and data collection, 3) data processing and statistical analysis, and 4) writing for publication, presentations, and grant funding. In each of these aspects of science, the needs and demands placed on the software and hardware can differ significantly, and a science computing platform must adapt to address these needs in turn. First and foremost among these requirements, however, is the need for utmost stability and consistency. In psychophysics and many other scientific fields, when using a computer to conduct a study, consistent performance is often more crucial to the science than 'absolute' performance. While it is always ideal to perform a task as quickly as possible, predictability is a vastly more desirable trait. If there is one thing I've learned about scientists, it is that they absolutely hate when things act without some semblance of consistency, i.e. they hate when things change. Fast-moving technologies tend to be avoided like the plague in research science, and, as in data centers, science tends to err on the side of caution (or the misperception of caution). Hence it is not uncommon to see Windows 2000/XP, RHEL 4/5, Ubuntu 8.04, or Mac OS 9 in many respectable research establishments, and many scientific machines will never have internet access or networking of any kind. The one defining quality all of these releases have in common is that they can consistently do the same thing over and over without much variance. Granted, some of these platforms consistently do the same thing over and over badly, without variance.
As stated before, a scientist's mindset is that they would rather live with a software bug they knew would show up 100% of the time than risk the uncertainty that, in trying to fix a software bug, the developers may have changed something vital to their research or workflow. My talk will look at what qualities expensive, proprietary software platforms like MATLAB have that are lacking in libre applications, both good and bad. My goal is to point out where, at least in my field, the current paid-software options are severely lacking, and how libre software is stepping up and successfully filling the void.
- 2018 September 8 - 16:15
- 45 min
- Libre Application Summit
- State of the Application Ecosystem