Open Question: Can you name an angiosperm plant which bears neither fruit nor flower?


All angiosperms produce flowers and fruit, although some might not be things you would see as flowers or fruit unless you are a botanist.

Grasses flower, and the fruit they produce is what a botanist calls a caryopsis and most people call grain.

Wind-pollinated trees have catkins or similarly un-flower-like inflorescences. Alders (Alnus spp.) have female ‘cones’ and tassel-like male catkins as their inflorescences. These are true flowers, as they have an enclosed ovary protecting the female ovule.

Maple tree fruits are the winged samaras that twirl and glide through the air.
http://faculty.fmcc.suny.edu/mcdarby/animals&plantsbook/Plants/05-Angiosperms.htm#seeds,%20fruits,%20etc

Cottonwood poplars (Populus deltoides) produce the white fluff that carries the fruit away on the wind. http://www.cas.vanderbilt.edu/bioimages/species/pode3.htm#Flower

View the original article here

This planet obeys the law—stats on volcanic eruptions show pattern called Benford’s Law


Scientists delight in extracting order from chaos—finding patterns in the complexity of the real world that pull back the curtain and reveal how things work. Sometimes, though, those patterns create more head-scratching than excitement. Such is the case with Benford’s law. One might expect a collection of real-world data—say, the half-lives of various isotopes—to pretty much look like random numbers. And one might further expect the first (non-zero) digit of each of those numbers to also be random (i.e. just as many 2s as 9s).

Oddly, one would (in many cases) be wrong. It turns out that 1s are more likely than 2s, which are more likely than 3s, and so on. Not only that, the probabilities match a logarithmic distribution, just like the spacing on a logarithmic scale. The number 1 will be the first digit about 30 percent of the time, 2 will occur nearly 18 percent of the time, all the way on down to 9 showing up only about 5 percent of the time.
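That distribution has a compact closed form: the probability that the leading digit is d is log10(1 + 1/d). A quick sketch (in Python, for illustration) reproduces the percentages quoted above:

```python
import math

# Benford's law: probability that the first significant digit is d
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in benford.items():
    print(f"{d}: {p:.1%}")  # 1: 30.1%, 2: 17.6%, ... 9: 4.6%
```

Note that the nine probabilities sum to exactly 1, since the terms telescope: log10(2/1) + log10(3/2) + … + log10(10/9) = log10(10).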

Law-abiding citizens everywhere will be happy to know our planet also obeys Benford’s Law, with the duration and size of volcanic eruptions showing the same sort of pattern.

This strange phenomenon was first expressed in 1881 by an astronomer named Simon Newcomb. While using printed tables of logarithms, he noticed that the pages containing numbers that start with 1 were much more worn than the others. After thinking it out, he proposed that the occurrence of digits in the log tables in fact followed a logarithmic distribution themselves.

In 1938, the physicist Frank Benford rediscovered this idea, explored it more fully, and formalized the equation that describes it. He analyzed a number of data sets and showed that the relationship existed in the real world. It’s obviously not universal—it won’t be true of numbers in a telephone book, for example, which share assigned area codes and prefixes. Still, Benford’s law has held good for a truly bewildering variety of data sets, including the surface area of rivers, the specific heat of chemical compounds, mathematical constants in physics, baseball stats, street addresses, populations of US counties, and a number of mathematical tables and series. (Try a few more for yourself.)

Perhaps most famously, Benford’s law has been used to detect financial fraud. Folks who cook the books assume that random numbers will look inconspicuous, not realizing that’s exactly what can make them look conspicuous. Dodgy rounding will also cause a data set to stick out like a sore thumb and get you caught red-handed. (You can hear about an example in this episode of WNYC’s Radiolab.) It’s often been suggested that Benford’s law should be applied to the results of suspicious elections, but the relationship can be unreliable unless numbers span multiple orders of magnitude.

The burning question that can get some people downright irritated with the whole business of Benford’s law is “why the hell should this be true?” No explanation is completely satisfying (unless you’ve got the fortitude for some mathematical heavy lifting), but a couple come close to de-spookifying the idea in at least some circumstances.

Think of a number starting with a 1. What would it take for it to start with a 2? Well, you’d have to double it. Now consider a number starting with a 9. An increase of only about 10% will have it starting with a 1 again. And, of course, the process repeats—this number will have to be doubled again before it will start with a 2. For this reason, financial growth (such as investments) will follow Benford’s law quite faithfully.
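That doubling argument is easy to check numerically. The following sketch (an illustrative simulation, not something from the article) compounds a balance by 3% per step and tallies the leading digit at each step; the observed frequencies land close to Benford's predictions:

```python
import math
from collections import Counter

# Compound a balance by 3% per step and record the first significant digit.
balance = 1.0
digits = Counter()
for _ in range(5000):
    digits[f"{balance:e}"[0]] += 1  # first char of scientific notation
    balance *= 1.03

total = sum(digits.values())
for d in "123456789":
    observed = digits[d] / total
    expected = math.log10(1 + 1 / int(d))
    print(f"{d}: observed {observed:.3f}, Benford {expected:.3f}")
```

The growth rate barely matters: any steady multiplicative process whose log is irrational will spend about 30% of its steps with a leading 1.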

Several years ago, researchers in the Earth sciences began taking an interest in seeing whether our planet’s behavior followed Benford’s law. A paper published in 2010 applied the analysis to things like the length of time between geomagnetic reversals (when the Earth’s magnetic “north” pole flips to the opposite geographical pole), the depth of earthquakes, greenhouse gas emissions by country, and even the numbers of infectious disease cases reported to the World Health Organization by each nation. All of them showed a decent fit to Benford’s law. (As did some things out of this world, like the rotation frequencies of pulsars and masses of exoplanets.)

In a recent paper published in Geology, a pair of Spanish researchers extend this to three more data sets: the area and ages of volcanic calderas and the duration of volcanic eruptions between 1900 and 2009. This is more than just a bit of fun with numbers, as we’re past the point where Benford’s law needs confirmation. The goal is to use it as a sort of simple truth-check on databases of geologic data. If these things don’t follow Benford’s law, then it could be a sign that a data set is unrepresentative of reality or contains some sort of pervasive error or bias.

Benford’s law fit the eruption duration data very well. The fit for the caldera areas was pretty good, too, though a few digits differed just enough that the authors suspect some excessive rounding may have taken place. The caldera eruption ages, however, showed a marked deviation from Benford’s distribution. There were too many numbers starting with 2 and 3. When they looked closely, they saw this was due to a large number of North American calderas between 23 and 42 million years old.

As it turns out, this is a well-known anomaly. It’s not clear whether there was really an unusual cluster of calderas at that time or this is simply a case of one area being studied more intensely. Regardless, removing those calderas from the analysis returned the data set to harmony with Benford’s law. In essence, Benford’s law provided another way to show that those calderas are anomalous.

Because researchers often want to know whether the data they’re analyzing is a representative sample of the world at large, any technique that could help them do so is likely to get a serious look. The authors conclude, “Since the use of Benford’s law may serve as a simple and quick quality test of data, and provide new ways to detect anomalous signals in data sets, it could be used as a validity check on future databases related to volcanoes.” In other words, before you go searching for patterns in a database, it might be prudent to make sure the database conforms to Benford’s pattern.

Geology, 2012. DOI: 10.1130/G32787.1


Physicist uses math to avoid traffic penalty


A physicist faced with a fine for running a stop sign has proved his innocence by publishing a mathematical paper, and has even won a prize for his efforts.

Dmitri Krioukov is a physicist based at the University of California, San Diego. When faced with a court hearing over allegedly driving through a stop sign, he put together a paper called The Proof of Innocence, which he has since published. The abstract for the paper reads: “A way to fight your traffic tickets.” The paper was awarded a special prize: the $400 that the author did not have to pay to the state of California.

Krioukov’s argument is based upon the premise that three coincidences happened at the same time to make the police officer believe that he had seen the physicist run the stop sign, when, in fact, he hadn’t. He writes: “[In this paper], we show that if a car stops at a stop sign, an observer, e.g., a police officer, located at a certain distance perpendicular to the car trajectory, must have an illusion that the car does not stop, if the following three conditions are satisfied: (1) The observer measures not the linear but angular speed of the car; (2) The car decelerates and subsequently accelerates relatively fast; and (3) There is a short-time obstruction of the observer’s view of the car by an external object, e.g., another car, at the moment when both cars are near the stop sign.”

As Physics Central explains, because the police officer was around 30m from the intersection where the stop sign was situated, “a car approaching the intersection with constant linear velocity will rapidly increase in angular velocity from the police officer’s perspective.”

The physicist even created graphs showing what would have happened to his angular velocity if he had either been driving at a constant linear velocity or had made a quick stop and then accelerated back to speed, which is what he claims happened (actually, he sneezed, causing him to brake harder than usual). It was during this sneeze stop that another vehicle obscured the police officer’s view of Krioukov’s car, argues the paper.
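The geometry behind those graphs is simple to sketch. With the observer a perpendicular distance r0 from the road and the car at position x moving at linear speed v, the subtended angle is arctan(x/r0), so the angular speed works out to v·r0/(r0² + x²). A minimal illustration (the speed and positions here are made up for the example, not taken from the paper):

```python
def angular_speed(x, v, r0=30.0):
    """Angular speed (rad/s) seen by an observer at perpendicular
    distance r0 (m) from a car at road position x (m) moving at v (m/s)."""
    return v * r0 / (r0**2 + x**2)

# At constant linear speed, angular speed spikes as the car passes x = 0,
# which is why a non-stopping car can "look fast" from the officer's spot.
for x in (-60, -30, 0, 30, 60):
    print(f"x = {x:4d} m -> {angular_speed(x, 10.0):.4f} rad/s")
```

The sharp peak near x = 0 is the crux of the argument: a brief hard stop followed by rapid acceleration produces a similarly peaked angular-speed curve, so the two motions are hard to tell apart if the view is momentarily blocked.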

The conclusion of the paper? It isn’t the police officer’s fault, but he or she was nonetheless wrong, as their “perception of reality did not properly reflect reality.” Bet that’s a statement the other officers loved reminding them of.

wired.co.uk


Delivery begins for first units of Raspberry Pi’s $35 Linux computer


The Raspberry Pi foundation has started shipping units of the much-anticipated $35 Linux computer, handing out the first boards and conducting educational seminars with students.

The Raspberry Pi foundation was originally established with the goal of producing low-cost computers that students could use to learn computer programming. The project later attracted the interest of Linux users and embedded computing enthusiasts. The launch product is a bare board that is roughly the size of a deck of playing cards with a 700MHz ARM11 CPU and 256MB of RAM.

Faced with overwhelming demand for the product prior to the launch, the Raspberry Pi foundation decided earlier this year to transition to a licensed manufacturing model. They partnered with Premier Farnell and RS Components, hardware makers that are going to serve as retailers for the first batch of units and then take over manufacturing for all subsequent production.

Manufacturing on the first batch started in January, but completion was delayed due to an issue with one of the components. The first boards arrived in the UK at the end of March, but couldn’t be delivered right away due to compliance issues. The foundation finally announced on April 14 that deliveries have officially begun. A video that was published recently on the Raspberry Pi website shows founder Eben Upton hand-delivering a set of the $35 boards to RS Components. Consumers who ordered the board from RS and Farnell will reportedly receive updated delivery estimates soon.


Microsoft talks Windows 8 SKUs: Windows 8, Windows 8 Pro, and “Windows RT” for ARM


Microsoft has announced the main Windows 8 product line-up. There will be two retail editions for Intel-compatible processors (both 32- and 64-bit), Windows 8 and Windows 8 Pro; a third edition for ARM processors, Windows RT; an enterprise edition, Windows 8 Enterprise, for volume license customers; and finally, some number of local-language-only versions for China and other selected emerging markets.

The blog post containing the announcement tabulates the major differences between the three main consumer editions—Windows 8, Windows 8 Pro, and Windows RT. Windows 8 is positioned as the replacement for Windows 7 Basic and Home Premium. Windows 8 Pro is viewed as the replacement for Windows 7 Professional and Ultimate. Windows RT will be exclusively available as a pre-install on ARM hardware, with no direct retail availability.

Windows 8 and Windows RT have broadly matching feature-sets. As previously announced, Windows RT adds Word, Excel, PowerPoint and OneNote as built-in features and includes full-device encryption, which Windows 8 lacks. Conversely, Windows 8 includes support for existing x86 and x64 applications (naturally), Storage Spaces, and Windows Media Player.

Windows 8 Pro builds on Windows 8 to include support for BitLocker, domain membership, Hyper-V virtualization, Group Policy support, and certain other high-end features. No edition of Windows 8 will ship with Windows Media Center. It will, however, be available as an “economical” add-on to Windows 8 Pro.

Windows 8 Enterprise extends Windows 8 Pro to include various unspecified features to aid PC management, more complex security and virtualization scenarios, and “much more.”

We’ve asked Microsoft how the feature-set of the emerging market editions will compare, but the company has no comment at this time; it’s likely to serve as the replacement for Windows 7 Starter, and perhaps to a lesser extent Windows 7 Home Basic.

The new line-up is simpler than the Windows 7 line-up. While most consumers were never even offered the full range of Windows 7 options, the smaller set of SKUs should make purchasing simpler. One of the concerns often raised since the announcement of Windows on ARM processors is how Microsoft would inform consumers that this edition wouldn’t support existing x86 and x64 software. The decision to brand the ARM edition as something other than Windows 8 appears to be Microsoft’s answer to this conundrum. Whether “Windows RT” is sufficiently different from “Windows 8” in order to really set user expectations appropriately remains to be seen.


The Publication Paradox


This week is Open Access Week (follow #oaweek on Google+ and Twitter), and while I’ve shared a few links and talked to some of my officemates, I haven’t taken (or had, really) time to expand on my thoughts more fully. But I take Open Access very seriously, and I know the status quo (researchers signing over copyright to journals who lock away the research behind paywalls) won’t change unless more of us keep sharing openly to the widest audience possible. Because the antithesis of Open Access isn’t copyright — it is the unwillingness to share any ideas at all.

In the field of education, particularly in education policy, research is conducted and published in one of two ways: either by academics to submit to journals, or by think tanks and other groups who generally do non-university-based research. Academics will defend their system because their research is peer-reviewed, whereas much think tank research is not. In fact, in an effort to force a peer review process onto think tank research, the National Education Policy Center created the Think Tank Review Project, which includes reviews of think tank research and the annual Bunkum Awards. (Disclaimer: I know, work with, and take classes from various scholars at the NEPC.) If the academics are right, and their peer-reviewed research is superior, does that mean it is more influential? Hardly. According to this research by Holly Yettick (also affiliated with the NEPC), university-based education research is only cited about twice as often in major news outlets as research from think tanks, even though universities publish about 15 times more research (2009, emphasis mine). Yettick’s conclusions to this report include a recommendation to education reporters, urging them to consider more sources because “Unlike think tank employees, university professors generally lack the incentives and resources to conduct public relations campaigns involving outreach to journalists” (p. 15). My question is this: How does copyright and traditional publishing affect this incentive structure, and how can open access change it?

First, imagine you work at a think tank and you’re proposing research. Even before writing a word, you probably have an audience in mind that you’d like to reach with your work. Once your research is approved, you go about the research process and publish a report. Because the think tank does the researching and the publishing, no transfer of rights is necessary — the work was a work for hire and copyright belonged to the think tank from the very beginning. Now the think tank can set about promoting the results of the research to the media and other interested audiences. They have an incentive to promote because the research, the publication, and the promotion are all carried out by the think tank, an organized unit that includes you in its shared ownership of the work. This gives the think tank a collective interest in spreading its ideas.

Now imagine you’re a researcher at a university. You too have an audience in mind that you’d like to reach, but when your research is finished you submit your report to a peer-reviewed journal. In order for the journal to publish (or sometimes, even to consider) your article, you must transfer to them your copyrights. The journal now owns the report, and this is where the incentive system starts to break down. The article might be read by your peers, and may help you receive tenure, but surely (I hope) your peers and tenure committee don’t comprise the true scope of your target audience. If you, the researcher, are still intent on making sure your work reaches the intended audience, how effectively can you promote something you no longer own? Most efforts to share your report will violate the publisher’s copyright. You could create derivative work, in the form of conference presentations, blog postings, or articles for magazines, but this actually requires extra effort to avoid a copyright violation, impedes future progress on other research, and often does not count towards tenure.

Instead of self-promotion, can you, a researcher, count on a journal to promote your work? Why would they? Do they know the scope of the audience you would like to reach? What incentives does the journal have to promote work they did not create? The journal wants subscribers, to be sure, but because they have no rights to your future research (or that of any scholar), their main incentive is to preserve a system that positions their journal as one of the few credible outlets for research. For example, the American Education Research Association has 25,000 members and publishes six peer-reviewed journals. If you’re an education researcher, you probably belong to AERA and you respect and read the scholarship in their journals. But in Holly Yettick’s dissertation research, searching through “nearly forty thousand articles in hundreds of publications” (2011, par. 14), she has yet to see a single AERA-published article mentioned anywhere. So while you might hear Brian Williams start a story on the NBC Nightly News with the phrase, “A new study published in the journal Science…,” you won’t hear an equivalent statement mentioning an AERA journal, despite education getting plenty of attention from NBC.

Think tanks have an advantage because the shared ownership of the creation and publication of research creates a common incentive for promotion. Even if the research is lower quality, the spread of the research to a wide audience gives the research power and influence. The traditional system of university-based researchers transferring rights to publishers in exchange for publication might produce higher-quality work, but leaves us with a publication paradox: how do creators promote something they don’t own, and how do owners promote something they did not create?

I see two options for improving the incentives to promote academic research: (a) publishers should own creation, or (b) creators should maintain ownership (or at least rights to open distribution). Option (a) essentially turns a publisher into a think tank, and would not fit with academia’s culture of academic freedom and independence. Some universities host their own journals, but they do not do so for the purpose of sponsoring and publishing their own work. Furthermore, most university researchers don’t want their work to be seen as “work for hire.” Option (b), which is not without its challenges, is the better option, and the growing Open Access movement is making it a more viable option every day. But for it to be successful, researchers are going to have to support change — not for selfish reasons, and not out of spite for publishers, but to ensure the best research is freely available to the audience for which it was intended.

Yettick, H. (2009). The research that reaches the public: Who produces the educational research mentioned in the news media? (p. 37). Boulder and Tempe: Education and the Public Interest Center & Education Policy Research Unit. Retrieved from http://nepc.colorado.edu/publication/research-that-reaches


Leaked Office 15 video hints at SkyDrive, 365 integration


Following a leaked roadmap of Microsoft’s upcoming Office 15 productivity suite last week, a new video has been posted hinting at SkyDrive and Office 365 integration as well.

Shared by Rafael Rivera of Within Windows earlier today, the video depicts a typical morning commute by car or train, where files stored in Microsoft’s cloud can be accessed “wherever you go … so it’s easy to pick up where you left off.” Though this functionality currently exists in Office 2010, it’s likely Microsoft is looking to put its cloud services front and center with the latest release. For tablet deployments of Windows 8, this could be the company’s answer to Apple’s iCloud/iWork document sync.

Office 15 is rumored for release in 2013, with a public beta coming this summer.

Office 15’s “first run” introductory video.


Tell the White House taxpayers should have access to the results of the research we fund – Act by Jan. 2


The opportunity

As part of the process of fulfilling Section 103 of the 2010 America COMPETES Act, the White House Office of Science and Technology Policy (OSTP) has issued a Request for Information (RFI), asking individuals and organizations to provide recommendations on approaches for broad public access and long-term stewardship to peer-reviewed scholarly publications that result from federally funded scientific research. The RFI poses eight multi-part questions.

The full text of the RFI may be found at: http://www.gpo.gov/fdsys/pkg/FR-2011-11-04/html/2011-28623.htm

NOTE: A second RFI has also been issued on the topic of public access to digital data. SPARC/ATA will coordinate with allied organizations including ARL and CNI to formulate a response.

Who should respond?

It is urgent that as many individuals and organizations as possible – at all levels – respond.

For reference, the RFI specifically calls for comments from “non-Federal stakeholders, including the public, universities, nonprofit and for-profit publishers, libraries, federally funded and non-federally funded research scientists, and other organizations and institutions with a stake in long-term preservation and access to the results of federally funded research.”

If you can’t answer all of the questions, answer as many as possible – and respond to questions as directly as possible.

Organizations beyond the U.S., with experience with open-access policies, are also invited to contribute.

How the results will be used

The input provided through this RFI will inform the National Science and Technology Council’s Task Force on Public Access to Scholarly Publications, convened by OSTP.

OSTP will issue a report to Congress describing:

- Priorities for the development of agency policies for ensuring broad public access to the results of federally funded, unclassified research;
- The status of agency policies for public access to publications resulting from federally funded research;
- Public input collected.

Taxpayers paid for the research. We deserve to be able to access the results.

The main point to emphasize is that taxpayers are entitled to access the results of the research our tax dollars fund. Taxpayers should be allowed to immediately access and fully reuse the results of publicly funded research.

To discuss talking points in further detail, don’t hesitate to contact us.

How to respond

The deadline for submissions is January 2, 2012. Submissions should be sent via email to publicaccess@ostp.gov. Please note: OSTP will publicly post all submissions after the deadline (along with names of submitters and their institutions) so please make sure not to include any confidential or proprietary information in your submission. Attachments may be included.

As ever, thanks for your commitment to public access and the advancement of these crucial policies.

If you have any questions or comments, don’t hesitate to contact:

Heather Joseph
Executive Director, SPARC and spokesperson for the Alliance for Taxpayer Access
heather [at] arl [dot] org

Jennifer McLennan
Director of Programs and Operations, SPARC & the Alliance for Taxpayer Access
jennifer [at] arl [dot] org


R2RC Launches New Open Publishing Guide for Students


The Right to Research Coalition has announced a new student guide to publishing openly, entitled “Optimize Your Publishing, Maximize Your Impact.”  This new resource presents students with the ways in which they can make their research openly available for the widest possible readership and lays out the benefits of doing so – both as authors and as readers.  How do you know where to submit your manuscript?  What are the factors that go into deciding the most appropriate publication outlet?  Which journal will give your article the widest audience? Where to publish is too important a decision to put off until the end of the research process.

In addition to information on open-access journals, repositories, and authors’ rights, the guide includes a publishing choices decision tree outlining the different opportunities to make an article openly available throughout the publication process.  The publication process can be complicated, and an article can still be made openly available even if it’s published in a subscription-based journal.  The decision tree lays out all of the options, so students understand the flexibility they have when deciding to make their work openly accessible.

While there are many general, how-to resources for open publishing, this guide is specifically tailored to address students’ concerns when it comes to publishing an article and launching their research career.  From how to approach a research advisor about Open Access to the dividends the open access citation advantage can pay when launching a career, students are in a unique position when it comes to deciding to publish openly.

The new resource is also designed to be flexible. Not only can students use it to educate themselves and their peers about open publishing choices, but faculty can also use it to start the conversation with their students.  And, librarians can integrate it into their scholarly communication programs, especially during library orientation for new students.  There is also space on the final page for the guide to be localized to a particular institution and include information on a campus’ institutional repository or open-access policy.

Today’s students are tomorrow’s researchers, and this guide will help students make informed decisions about how and where to publish their work for maximum impact.

The Right to Research Coalition’s open publishing guide was produced with generous support from the Open Society Foundations.

Cross-posted from our blog at: http://www.righttoresearch.org/blog/r2rc-launches-new-open-publishi…


Dutch Malaria Foundation supports Open Access


The Dutch Malaria Foundation was founded in 2010. We are committed to a world without malaria. We want to achieve this by combining integrated, responsible pest control with innovative applied research, education and information. We see a need to fight malaria on two fronts: not only the protection of people but also the control of the mosquito that carries the parasite. With a combination of vector control, education and innovative research, it will be possible to effectively fight malaria. After all, this is the way the disease has been eradicated in Europe, America and other affluent parts of the world.

Education and innovative research are dependent on sharing of information. Participation of scientists from developing countries is essential in the fight against malaria. And Open Access to information is essential for many scientists in the developing world to be able to participate fully in the global scientific community. We therefore are strong supporters of the Open Access publishing system.

In order to make the scientific literature more accessible for scientists in the developing world we run the website MalariaWorld, which provides weekly updates on the malaria literature. Weblinks provide easy access to malaria research papers. We collaborate with Elsevier to promote open access. The site also serves as a social network for currently >6,500 users working in the field of malaria, and offers possibilities for blogging and forum discussions.

We are developing an Open Access 2.0 malaria journal in which not only reading but also publishing articles will be free of charge. In our view this offers the best opportunity for scientists in developing countries not only to read but also to be read.


DSpace Open Access repository development in Africa: Uganda, Zambia, Zimbabwe



PART FIVE: Uganda, Zambia, Zimbabwe

This is the fifth of a five-part series that looks at Open Access repository development in twelve African countries in celebration of Open Access Week Oct. 24-30, 2011. The first part (Botswana, Ethiopia and Ghana) may be found here: http://duraspace.org/dspace-africa-growing-open-access-knowledge-an… Parts two, three and four (Kenya, Malawi; Mozambique, Senegal; Sudan, South Africa) may be found here:
http://duraspace.org/dspace-open-access-repository-development-afri…
http://duraspace.org/dspace-open-access-repository-development-afri…
http://duraspace.org/dspace-open-access-repository-development-afri…

The series is co-authored by Iryna Kuchma, Open Access Programme manager, EIFL (http://www.eifl.net/) and EIFL-OA country coordinators: Netsanet Animut, Addis Ababa University and Chair of the Consortium of Ethiopian Academic and Research Libraries, Charles Banda, Copperbelt University, Zambia, Aissa Mitha Issak, Universidade Pedagógica, Mozambique, Gloria Kadyamatimba, Chinhoyi University of Technology Library, Zimbabwe, Richard B. Lamptey, Kwame Nkrumah University of Science and Technology, Ghana, Fredrick Kiwuwa Lugya, Makerere University Library, Uganda, Reason Baathuli Nfila, University of Botswana Library, Rosemary Otando, University of Nairobi, Kenya, Kondwani Wella, Kamuzu College of Nursing, University of Malawi and Carol Minton Morris, DuraSpace.

Makerere University Library became the first library in Uganda to set up an institutional repository called Uganda Scholarly Digital Library (USDL, http://dspace.mak.ac.ug/). Launched as a science repository but later changed to cover other disciplines, USDL has a total of 1,600 full text articles, reports, posters, and other scholarly materials.
Through Open Access organizations and groups like the Consortium of Uganda University Libraries (David Bukenya, dbukenya@ucu.ac.ug), EIFL-OA (Fredrick Kiwuwa Lugya, flugya@mulib.mak.ac.ug) and support from partners like INASP, EIFL, Sida Sarec, and Carnegie Corporation of New York, academic and research libraries in Uganda have started to show interest in having institutional repositories.
The Open Access initiative has been further strengthened through partnerships such as the Irish African Partnership for Research Capacity Building and the Database of African Theses and Dissertations (DATAD). Through its Open Access repository, the Irish African Partnership for Research Capacity Building (IAP) brings together universities of Ireland, Malawi, Mozambique, Tanzania and Uganda in a unique, high-level partnership to develop a coordinated approach to research capacity building in order to make an effective contribution to the reduction of poverty. With the support of the Association of African Universities (AAU) DATAD aims at improving the management and access to African scholarly work (theses and dissertations) thus putting Africa’s research output onto the mainstream of world knowledge.

Zambia Library Consortium (ZALICO) promotes Open Access in the country and builds capacities among its member organizations to set up and maintain Open Access repositories.
In 2011 ZALICO organized a national Open Access Repositories workshop, funded by INASP, to explore DSpace software for repository building. Participants from 12 institutions attended: the National Assembly, National Institute for Industrial Scientific Research, National Technology Business Centre, National Science and Technology Council, National Institute for Public Administration (NIPA), University of Zambia, Bank of Zambia, Zambia Environmental Management Agency (ZEMA, formerly the Environmental Council of Zambia), Zambia Agricultural Research Institute, Copperbelt University, Mulungushi University and the Tropical Diseases Research Center (TDRC).
Open Access repositories are being developed by the following institutions: Copperbelt University Library, National Science Technology Center (NSTC), The University of Zambia and National Assembly of Zambia.

In Zimbabwe, OA initiatives have to a large extent been driven by university libraries through the Zimbabwe University Libraries Consortium (ZULC), with support from the International Network for the Availability of Scientific Publications (INASP) and EIFL.
All universities except the Catholic University, Great Zimbabwe University, Lupane State University and Solusi University have IRs at various stages of development. The major content of these repositories consists of journal articles, published conference papers, projects and dissertations, digital collections and past examination papers, whose full texts are accessible on the universities’ local Intranets. Most collections are mounted on the Greenstone and/or DSpace platforms. The University of Zimbabwe also provides book chapters, working papers, research reports and seminar papers. Its repository is listed in the Directory of OA Repositories (OpenDOAR) and is accessible on the internet.
University of Zimbabwe (UZ): The institutional repository (http://ir.uz.ac.zw/jspui/) was established in 2005 using DSpace software. It contains past exam papers, conference papers, staff publications, DATAD abstracts of theses and dissertations, full-text electronic theses (ETD-db), book chapters, working papers, research reports and seminar papers, and it is available through the internet. The UZ has the most successful institutional repository: it is well populated and accessible on the web. This is due to a number of factors. The UZ is the country’s oldest university, with a well-documented research culture that attracts funding from donor organizations. It has a publishing house with a decent output. The UZ library personnel were the first to receive institutional repository training, which they are now cascading to other libraries. Its bandwidth of 27 Mbps is the envy of other universities, and its long history and location in the capital city make it a favourite destination for the best librarians. These factors have created a conducive environment for the implementation of a successful institutional repository at the UZ.
The University of Zimbabwe (UZ) library, with financial support from EIFL, has embarked on a campus-wide Open Access (OA) advocacy campaign targeting UZ management, administrative personnel and Deans of Faculties. The ultimate purpose is to advocate for the adoption of a campus-wide OA policy. During OA Week, a one-day workshop will be held for 20 UZ management staff (executives, i.e. the Vice Chancellor, Pro Vice Chancellor, Registrar, and Deans of Faculties) in an endeavour to achieve management buy-in on the concept of OA, with the hope of advocating for OA policy formulation and implementation in the near future. A series of workshops and presentations targeting teaching staff (chairpersons of departments and lecturers) in all 10 faculties will be held by faculty librarians, with the sole purpose of marketing and publicising both the concept of OA and OA resources relevant to individual faculties. An advocacy video will be produced containing testimonies of local academics who have so far benefited from exposure on the IR platform, along with other success stories. Overall, the library looks forward to the adoption of a university OA policy that will enable access to knowledge in support of teaching, learning and research at the UZ; this advocacy campaign is intended as a conducive platform for that vision.
Only the UZ’s IR is listed in the OpenDOAR. The rest are only available on Intranets for a number of reasons. Firstly, institutions are reluctant to mount their IRs on the Internet due to very limited bandwidth which limits connectivity. Secondly institutions are afraid of infringing intellectual property rights on some of the works in their IRs. At some institutions submission to the IR is done through the Research and Scholarship Committee to ensure compliance with intellectual property rights and to enhance submission.
Zimbabwean institutions are at an advanced stage of developing IRs. Most institutions have IRs running on their Intranets. Uploading IRs onto the net is only a matter of time for most institutions. The major constraint is fear of copyright infringement and lack of IR policies. Further training in these aspects would ensure expedited uploading onto the web and availability of Zimbabwean research to a wider global audience.

View the original article here

The Birth of JIDC…A new kind of Journal


In the beginning . . . there was . . . an Idea . . . JIDC

There is an old saying that “Success has a thousand mothers and failure has none”. JIDC, I am proud to say, has thousands of mothers, fathers, sons and daughters. Truly, thousands. The success of JIDC is the fruit of the dedication and hard work of editors, mentors, proofreaders, page setters, reviewers, web designers, web wizards, translators, and of course the authors who contribute their precious work to JIDC.

Interestingly, I am frequently asked how JIDC began. In a way, it began overlooking a mountain in Bishkek, Kyrgyzstan, in May of 2006, where a great number of my associates were attending the first International Meeting of Infectious Disease in Central Asia. We had many intense discussions on the problems facing scientists from developing countries attempting to publish in predominantly western journals, and from these discussions evolved the unorthodox idea of a journal dedicated to scientists and infectious disease in developing countries.

Bishkek, Kyrgyzstan  AdvanTours Photo

Many of us had long recognized that scientists and infectious disease science from developing countries were dramatically underrepresented in journals published in western countries. The underlying science from infectious disease clinicians and scientists, we believed, was of a high calibre, but often the writing and presentation within manuscripts were not. The solution, we concluded amid the majestic scenery of Bishkek, was to provide assistance with the writing and presentation of data in draft JIDC manuscripts. We thus added to JIDC a mentor system to guide and aid authors from developing countries with both writing skills and manuscript organization.

But alas, finances presented the greatest hurdle for scientists to publish and for the JIDC to function. Many journals require a payment of sorts to be made for accepted manuscripts to be published. The average going rate of $3,000 USD in western journals is manageable by western scientists, but the amount is simply out of reach for many scientists and clinicians in developing countries. In fact, it may represent nearly half a year’s wages in some developing countries. The JIDC, we declared, must be free of fees for those who cannot afford them. JIDC today is open access, free to submit, and the publication fee is waived for those who cannot afford the modest fee of 200 euros. The financial burden of maintaining JIDC is shouldered by volunteers of JIDC and grants from foundations and organizations such as the Foundation of Bank of Sardinia, Sardegna Ricerche, the University of Sassari, Shantou University Medical College, the Li Ka Shing Foundation, and the University Health Network in Toronto, Canada. Our heartfelt gratitude goes out to these people and organizations.

Through the months and years that followed the Bishkek meeting, JIDC was able to attract the dedicated team that now manages submitted manuscripts, reviews manuscripts, edits manuscripts, and publishes papers. The success of JIDC is the success of the many people who have joined in this exciting and rewarding journey! As we look forward to our fifth anniversary in 2012, the future is in our hands and it is a glorious sunrise.

Salvatore Rubino, Editor in Chief and humble servant

JIDC Website:  http://www.jidc.org/index.php/journal

JIDC Editorial Meeting 2011 in Stintino, Sardinia

View the original article here

Feature: Ars Technica system guide: Bargain Box April 2012


Since the early 2000s, the Ars System Guides have been helping those interested become “budding, homebuilt system-building tweakmeisters.” This series is a resource for building computers to match any combination of budget and purpose.

The Bargain Box (formerly the Ultimate Budget Box) is the most basic box we cover in the System Guides. As the lowest-priced box in the guides, it lacks the sex appeal of its flashier siblings, and it faces a host of competition today: first it was OEM pre-builts, then netbooks, and now tablets.

Still, there seems to be a place for a basic desktop system. These live on in strength in the office, where the vast majority of employees read e-mail, crunch spreadsheets, and stream training videos. At home, boxes like this are a convenient place to stash all the pictures from the family vacation, and a nice place to hold media that won’t fit on the (relatively) limited storage of the average tablet or cell phone. Tucked into the home office, or perhaps serving as the core of a low-budget HTPC, the desktop still fills a legitimate need for many.

There’s no pretense of other needs in the Bargain Box. It gets a reasonable amount of storage despite its low cost, and there’s no attempt at 3D ability outside of the basic level of performance found in the integrated graphics (IGP). It’s there to do the basic tasks with minimum fuss.

For the lowest-cost desktop possible: honestly, buying an OEM box makes sense.

Big OEMs like Dell, HP, Toshiba, Lenovo, and others all get volume discounts and economies of scale that the individual builder or even smaller OEMs can’t match. This holds particularly true with software. Paying for the OS is a big chunk of change in systems like these, and something that will significantly affect any builder.

The Bargain Box is probably more useful to such buyers (and potential builders) as a reference on what specs their pre-built system should meet.

For the enthusiast who insists on building his or her own box, though, a pre-built box isn’t a choice. Building it yourself, even a bargain system, is a must. The Bargain Box is aimed at them. When even a stripped-down Budget Box is too much, the Bargain Box is designed to provide an even lower-spec’d price point.

We do try to emphasize a few things we think are worth the money, particularly higher-efficiency power supplies (PSU) than are typically found in bargain-basement boxes, as well as USB 3.0. Neither may be critical, but if you’re building it yourself, they are nice things to consider for relatively minimal cost.

Tablets are the biggest change since the last update of the Bargain Box. They’re now powerful enough, light enough, practical enough, and have nice enough screens to handle everyday computing for a lot of consumers.

For a few things, though, users may want to keep a desktop around. Media has to be stored somewhere, and that may be on a computer. Photo processing (not just viewing) is still not ideal on a tablet, and there are lots of times where the virtual keyboard on a touchscreen is impractical.

The line is getting increasingly blurred, though. Tablets have keyboard docks, more processing power, lower costs, and increased use of the cloud for storage. Also, don’t forget the netbook; it occupies the same price point—actually a lower one than high-end tablets—yet packs a physical keyboard and a hard disk for bulk storage. Processing power is still a little light by desktop standards however, and relatively low screen resolution is a limiting factor for serious use.

The Bargain Box is the lowest-cost setup in the System Guides. It’s priced below even the Budget Box from the main three-box System Guide, sacrificing any pretense of gaming ability in favor of even lower cost and competence at only the most basic tasks. The target is sub-$500 (without OS) for the Bargain Box, including monitor, mouse, and keyboard.

While a low-price, value-focused box is the goal, we did have a few priorities: a balance of processing power and storage, plus two slight indulgences over the absolute lowest cost, USB 3.0 and a decent high-efficiency power supply. All that media has to go somewhere, and too little processing power means the Bargain Box would be a chore to use, so those get some attention. USB 3.0 may be planning for the future, but it’s a future that is already well on its way. Finally, a high-efficiency power supply is a nice thing to have, both in terms of saving money in the long run and in terms of reducing the A/C load in the summer.

Saving a few more dollars could be done, but we feel the Bargain Box does the job as far as the lowest reasonably priced system possible without cutting too many corners.

Unfortunately, the operating system is a significant chunk of change in a $500 box. Windows 7 Home Premium is easily 15 to 25% of the budget, while an open-source OS such as Linux Mint or Ubuntu still lacks the traction (and the polish) on the consumer desktop that Windows has.

Due to the prevalence of Windows, it’s hard to ignore. So many users are familiar with it, particularly business users, that non-Windows operating systems are not an attractive option. As noted, Linux tends to lack traction in the desktop market, and to non-geeks the required polish never quite seems to be there.

For those who do believe Linux is worth a try, don’t forget to look beyond the mainstream full-size distros: lighter desktop environments such as Xfce and E17 are options, and there are also distributions built for specific uses (should your needs match up), like the media-center-focused XBMC.

We cover two versions, one powered by AMD and one powered by Intel. Each has strengths and weaknesses: better CPU performance with Intel, better graphics performance with AMD.

GeIL 2x2GB (4GB) DDR3-1600 1.5v = $26.99
Seagate 500GB 7200rpm = $79.99
LG 22x DVD-RW = $16.99
NZXT Source 210 = $39.99
Seasonic SS-300ET 300W = $39.99
Acer S201HLbd 20″ 1600×900 = $99.99
Microsoft Wired Desktop 600 = $22.99
Speakers (no specific recommendation) = $15

Intel version:
Intel Pentium G620 (2.6GHz) retail = $69.99
Gigabyte GA-H61MA-D3V = $69.99
Total = $481.91

AMD version:
AMD A4-3400 (2.7GHz) = $69.99
Gigabyte GA-A75M-D2H = $79.99
Total = $491.91
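As a quick sanity check on the arithmetic, the two build totals can be recomputed from the component prices listed above (a throwaway sketch; the figures are simply the guide's quoted prices):

```java
// Recompute the Bargain Box totals from the listed component prices.
public class BargainBox {
    // Parts common to both builds: RAM, HDD, DVD-RW, case, PSU,
    // monitor, keyboard/mouse combo, speakers.
    static double sharedTotal() {
        double[] shared = {26.99, 79.99, 16.99, 39.99, 39.99, 99.99, 22.99, 15.00};
        double sum = 0;
        for (double p : shared) sum += p;
        return sum;
    }

    // Each platform adds its own CPU and motherboard.
    static double intelTotal() { return sharedTotal() + 69.99 + 69.99; } // G620 + GA-H61MA-D3V
    static double amdTotal()   { return sharedTotal() + 69.99 + 79.99; } // A4-3400 + GA-A75M-D2H

    public static void main(String[] args) {
        System.out.printf("Intel build: $%.2f%n", intelTotal()); // $481.91
        System.out.printf("AMD build:   $%.2f%n", amdTotal());   // $491.91
    }
}
```

The $10 gap between the builds comes entirely from the pricier FM1 motherboard on the AMD side.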

Differences in CPU and graphics performance between the two platforms are very real, but in the grand scheme of things overall performance is still limited. Still, the differences may matter more to specific user types, so we discuss both.

Intel version: Pentium G620 retail
AMD version: A4-3400 retail

AMD and Intel each offer different strengths for the Bargain Box. AMD offers markedly superior graphics performance (Anandtech), while Intel offers significantly better CPU performance and lower power consumption than the dual-core A4-3400.

On the AMD side, for a few bucks more, the triple-core A6-3650 competes much better with the Pentium G620 in CPU performance. It also crushes the Pentium G620 in GPU performance, but this starts the slippery slope of possible upgrades and more money. It is something worth considering given the small premium, but we leave that in the hands of individual builders.

Intel’s Pentium chips are somewhat handicapped by smaller caches, lower clock speeds, no Turbo Boost, and slower versions of Intel’s HD Graphics compared to their full-fledged Core i3/i5/i7 brethren. In spite of all that, it still takes a triple-core AMD chip to keep up with a dual-core Pentium; it’s not just that Intel’s Sandy Bridge architecture is good, but that AMD has lagged that much on the CPU performance side. As with AMD, a few bucks more buys faster Intel chips such as the Pentium G850, but this time the performance gain is much more marginal: the CPU side is already fast enough, while the graphics side is still slow, so we don’t consider it worth it. In fact, stepping down to the dual-core Celeron G540 might be worth it if every last dollar counts—just avoid the single-core parts even lower in the lineup.

The Bargain Box is decidedly not intended for gaming, but AMD’s graphics performance advantage is substantial enough that it’s worth mentioning. Keep in mind that compared to any remotely worthwhile discrete card, such as the Radeon HD 6770 or Nvidia GeForce GTX 550 Ti, integrated graphics performance is best described as anemic. Even the Radeon HD 6570 is notably faster.

Lower-power CPUs such as the Intel Atom and AMD Brazos (aka E-350 APU, and its brethren) could be used, but they don’t save very much money in the Bargain Box, despite their significant hit in performance. Even for lightweight photo management or streaming 1080p video, we feel they’re a little bit too much of a hit for the relatively small savings they provide. Less memory could also be used, but memory is so cheap today… anything less seems silly.

Heatsink: make sure to pick up a retail boxed CPU. The included heatsink/fan is more than adequate.

As far as the actual processor choice in the Bargain Box, we consider both pretty valid. For the vast majority of office-bound or Mom/Dad/Grandparent-bound systems, we might prefer the Intel setup due to lower power consumption, but the all-around flexibility granted by AMD’s superior graphics performance is definitely worth considering.

Next: a closer look at the motherboard, memory, and sound options.

View the original article here

NML Institutional Repository and Eprints


The NML Eprints repository was established in September 2009 with the objective of providing Open Access dissemination of scientific knowledge generated at CSIR-NML, Jamshedpur, India. NML’s Eprints gateway has considerably enhanced its global visibility, and its popularity has increased exponentially. Eprints@NML is registered with OAIster, OpenDOAR and ROAR, and is indexed by search engines such as Google, Google Scholar, BASE and Scirus. The repository has achieved more than 30% annual growth in traffic, with over 175,000 hits per month and a cumulative total of over 10 million hits since inception; in July 2011 it received 224,000 hits from 130 countries.

OA is an excellent information service through which we can help the whole community, whether millionaire or student. Today many academic, research and industrial organisations are focusing on OA platforms such as DSpace and Eprints. Presently, I work as the Repository Administrator of Eprints@NML at our organisation, archiving documents for the global visibility of NML’s research output and providing its benefits to information seekers in India and abroad, with the prior permission of the authors/publishers.

View the original article here

First Announcement: Berlin 10 Open Access Conference to be held in Stellenbosch, South Africa


Stellenbosch University, in partnership with the Max Planck Society and the Academy of Science for South Africa, has the pleasure of announcing that the prestigious Berlin 10 Open Access Conference will be held in Stellenbosch, South Africa. This will be the first time that the Berlin Open Access Conference will be held in Africa. As is tradition with the conference, it will explore the transformative impact that open, online access to research can have on scholarship, scientific discovery, and the translation of results to the benefit of the public.

The Conference will be held at the Wallenberg Research Centre, Stellenbosch Institute for Advanced Study (STIAS). STIAS is situated on the historic Mostertsdrift farm in the heart of Stellenbosch.

Conference date: 7-8 November 2012

Pre-conference date: 6 November 2012

The theme, programme, speakers and other relevant information will become available in forthcoming announcements which will also be available on the conference website (http://www.berlin10.org).

View the original article here

Feds shutter online narcotics store that used TOR to hide its tracks


Federal authorities have arrested eight men accused of distributing more than $1 million worth of LSD, ecstasy, and other narcotics with an online storefront that used the TOR anonymity service to mask their Internet addresses.

“The Farmer’s Market,” as the online store was called, was like an Amazon for consumers of controlled substances, according to a 66-page indictment unsealed on Monday. It offered online forums, Web-based order forms, customer service, and at least four methods of payment, including PayPal and Western Union. From January 2007 to October 2009, it processed some 5,256 orders valued at $1.04 million. The site catered to about 3,000 customers in 35 countries, including the United States.

To elude law enforcement officers, the operators used software provided by the TOR Project that makes it virtually impossible to track the activities of users’ IP addresses. The alleged conspirators also used IP anonymizers and covert currency transactions to cover their tracks. The indictment, which cited e-mails sent among the men dating back to 2006, didn’t say how investigators managed to infiltrate the site or link it to the individuals accused of running it.

Prosecutors said in a press release that the charges were the result of a two-year investigation led by agents of the Drug Enforcement Administration’s Los Angeles field division. “Operation Adam Bomb,” as the investigation was dubbed, also involved law enforcement agents from several US states and several countries, including Colombia, the Netherlands, and Scotland.

Lead defendant Marc Willem was arrested on Monday at his home in Lelystad, Netherlands, federal prosecutors said in a press release. On Sunday, authorities arrested Michael Evron, a US citizen who lives in Argentina, as he was attempting to leave Colombia. The remaining defendants—Jonathan Colbeck, Brian Colbeck, Ryan Rawls, Jonathan Dugan, George Matzek, and Charles Bigras—were arrested at their respective homes in Iowa, Michigan, Georgia, New York, New Jersey, and Florida. Attempts to reach the men for comment weren’t immediately successful.

The 12-count indictment charges all eight men with conspiracy to distribute controlled substances and to launder money. Several of them are also charged with distributing LSD and taking part in a continuing criminal enterprise. Each faces a maximum sentence of life in prison if convicted.

The arrests come about a year after Gawker documented the existence of Silk Road, an online narcotics storefront that was available only to TOR users. The site sold LSD, Afghani hashish, tar heroin and other controlled substances and allowed customers to pay using the virtual currency known as Bitcoin, the article reported. It wasn’t immediately clear what the relationship between Silk Road and Farmer’s Market was.

Farmer’s Market had thousands of registered users who hailed from every US state and the District of Columbia, as well as 34 other countries, according to prosecutors. The site relied on multiple sources of various controlled substances. The suppliers, operators, and customers communicated primarily through the website’s internal private messaging system.

In addition to the eight arrests, authorities arrested seven other people on Monday. In the course of the arrests, authorities seized hash, LSD, and MDMA, as well as an indoor psychotropic mushroom grow and three indoor marijuana growing operations.

View the original article here

Oracle tells jury “you can’t just step on somebody’s intellectual property”


SAN FRANCISCO—Google’s Android operating system might be free, but Google makes plenty of money off the system—and some of that cash ought to be headed to Oracle. At least that’s what the database company’s lawyer told a jury today. “You can’t just step on somebody’s intellectual property because you have a good business reason for it,” said Michael Jacobs, an Oracle lawyer.

One of the biggest tech-industry legal disputes has moved to trial now in San Francisco, where a panel of 12 men and women was sworn in to hear eight weeks of testimony about whether Google violated copyright and patent laws when it created its Android operating system. Jacobs told jurors that Google was so eager to see Android take off, it was willing to charge ahead without getting a license from Sun—even though top Google execs knew it needed one. (Java was created by Sun Microsystems, which was purchased by Oracle a few years ago.)

Google hasn’t had a chance to respond yet; its lawyers are scheduled to give an opening statement tomorrow morning.

This trial is the culmination of a case first filed almost two years ago. Over that time, it has morphed from a case mostly about patents to one that’s mainly about copyright. That’s in part because five of the seven patents Oracle originally asserted have been tossed out of the lawsuit. At one point, Oracle filed damage reports suggesting it would ask for up to $6 billion in damages; that’s been whittled down greatly. The sides still have conflicting damage reports, but numbers presented to the jury are likely to be in the tens of millions, not the billions.

“This isn’t the kind of property we’re used to,” Jacobs told the jury. “It’s intellectual property, which fuels our economy, and is the backstop for the R&D that great companies engage in.”

After that brief explication, Jacobs wasted no time in showing jurors an e-mail from Google engineer Tim Lindholm to Andy Rubin, the head of Android. That message has been the subject of contentious litigation already, and Google lawyers tried, unsuccessfully, to keep it out of court. It reads in part:

“What we’ve actually been asked to do (by Larry and Sergei) is to investigate what technical alternatives exist to java for android and chrome. we’ve been over a bunch of these, and think they all suck. We conclude that we need to negotiate a license for Java under the terms we need.”

It was the first of many e-mails Oracle presented that show Google knew it needed a license for Android, but just blew it off. “This was not a mistake, this was not inadvertence,” said Jacobs. “The decision to use Oracle intellectual property in Android was done at the highest levels of Google with consciousness and awareness of what’s going on.”

(Google has argued that the Lindholm e-mail is simply a strategic discussion of what to do, which was only initiated after Oracle filed suit.)

In the lawsuit, Oracle isn’t claiming that a license is needed to use the Java programming language, but it does say a license is needed for anyone using a Java application programming interface, or API. Google, meanwhile, maintains that neither the Java programming language nor the Java APIs are even subject to copyright.
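The language-versus-API distinction at the heart of the dispute can be made concrete with a toy snippet (illustrative only, and not code at issue in the case): the syntax below is the Java language, while the package, class, and method names it calls into belong to the Java class-library APIs.

```java
// Illustrative only; not code from the lawsuit.
// The Java *language* is the syntax: class declarations, loops, operators.
// The Java *APIs* are the library declarations this code calls into:
// package names, class names, and method signatures such as Collections.sort.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ApiVsLanguage {
    public static List<Integer> sorted(List<Integer> xs) {
        List<Integer> copy = new ArrayList<>(xs); // API: java.util.ArrayList
        Collections.sort(copy);                   // API: java.util.Collections.sort
        return copy;                              // language: plain Java syntax
    }

    public static void main(String[] args) {
        System.out.println(sorted(List.of(3, 1, 2))); // prints [1, 2, 3]
    }
}
```

Oracle's claim concerned the declarations and organization of such APIs, not the language syntax itself.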

Just because Google doesn’t charge for Android doesn’t mean it isn’t big business, said Jacobs. It makes its money the same way Google does from Web search—via advertising.

“So this is Google’s pitch: we don’t make money off of Android. We give it away for free to the world, and they put it on their cell phones and tablets,” said Jacobs. “But this is business. And in fact, Android is hugely profitable for Google.”

Google wanted to base its system on Java because it knew it needed an active developer community to design the apps that would make Android take off. Google believed that leveraging Java—with its community of six million developers worldwide—was the way to go.

At one point, Jacobs acknowledged that there’s precious little evidence of actual copying in the case. The allegation is that it’s the design of Java APIs that Google emulated. Still, he did show a few lines of code to the jury that Google is alleged to have copied “line for line” from Java code. “It’s not a lot, but copying is copying,” said Jacobs. Building Android “was not done in a clean room. It was not done without looking at Sun’s stuff.”

Only jurors who were prepared to sit for an eight-week trial even came to court; they were pre-screened with written questionnaires. Forty-four prospective jurors filed into court shortly after 8:00 am, but only about 20 of those were ultimately questioned. Judge William Alsup warned jurors not to look at any press coverage of the case, and not to talk about it with friends and family—standard orders for a jury, but more significant in a high-profile case like this one.

“You may not look at any website, blog, no TV item, or radio item,” Alsup said. “The case must be decided on the evidence at trial, not what some newspaper person is saying.”

The pool of jurors that was questioned included two computer engineers, each with more than 20 years of experience—one who worked for Cisco and another who worked for Hewlett-Packard. The HP engineer, when asked about her hobbies, said she enjoys creating smartphone apps in her free time. Both had involvement with their respective companies’ patent work, and the Cisco engineer said he was heavily involved in a patent lawsuit Cisco is currently defending.

During questioning by the judge, the Cisco engineer expressed skepticism about software patents, saying: “My opinion is that patent lawyers write those so vaguely it’s hard to argue them one way or another.” Ultimately, both engineers acknowledged they would have a hard time separating the evidence at trial from their own extensive work in the tech industry. They were dismissed by the judge.

Two other prospective jurors were attorneys: one an in-house insurance lawyer, the other a patent lawyer who works in Silicon Valley (mostly dealing with biomedical technology). Both were struck from the final panel by Oracle and Google lawyers—each side was allowed to strike three jurors in total. (Oracle struck the insurance lawyer, while the patent lawyer was knocked out by Google.)

At one point, prospective jurors were asked what kind of cell phones they used. About half raised their hand to indicate they use smartphones, while the other half had feature phones. Only one of the jurors (who is on the final panel) uses an Android phone.

The jury as finally selected is seven women and five men with a range of backgrounds. The panel includes a retired photographer, a woman who works for The Gap, a secretary with the EPA, an SF Muni bus driver, a plumber, a financial adviser, and a letter carrier for the Postal Service. Because it’s a federal jury, the jurors come from throughout the Northern District, which includes the entire San Francisco Bay Area and some counties further north.

View the original article here

FCC drops Google investigation over WiFi snooping, issues small fine


The FCC has dropped its investigation of Google’s collection of WiFi “payload data” as part of the company’s Street View project, but has slapped the company with a $25,000 fine for obstructing its investigation. The investigation sought to determine if Google had improperly collected and stored personal information from traffic over unsecured personal WiFi networks, including e-mail, text messages, and webpage requests. An investigation by the Federal Trade Commission was dropped in October of 2010, just as the FCC took up its own.

In a notice dated April 13, released in a partially redacted form (PDF) on April 15 by the FCC, the commission claimed, “For many months, Google deliberately impeded the (FCC Enforcement) Bureau’s investigation by failing to respond to requests for material information and to provide certifications and verifications of its responses.” In the notice, the FCC added that it had no further plans for enforcement action on the matter—in part because the Google engineer who developed the code used to collect and store WiFi data “invoked his Fifth Amendment rights and declined to testify.”

The FCC also said that it determined, lacking further information on the nature of the collection, that there was no precedent for applying the laws under which the investigation was launched—the Wiretap Act and the Communications Act—because the traffic intercepted by Google was not encrypted.

The New York Times reports that on Sunday, a Google spokesperson called the data collection “a mistake…but we believe we did nothing illegal.”


Share Open Access Worldwide: A Reflexive Documentary Coming Soon! – Describing Open Access Week in Croatia


SHOW 2011 (share/openaccess/worldwide) was the first-of-its-kind event to celebrate Open Access Week in Croatia, organized by InTech’s Katarina Lovrecic and Ana Nodilo at the Faculty of Humanities and Social Sciences.

We tried to catch a glimpse of a future age where sharing is done digitally, information can flow freely, and we can decide either to build barriers to contain it or to give open access and share it – worldwide. We have seen ideas floating among students and have preserved them in a jar. Now we want to open this jar, share it with you, and start a story.

Coming soon to your collection of videos with no rights reserved: a SHOW documentary reflecting on how students in Croatia were introduced to the Copyleft movement, Creative Commons licensing, Open Projects, the Open Content movement, the Open Access movement, and the Right to Research Coalition. Stay tuned, for we may have a New Year surprise for you and invite you into a conversation about a chance for a world of open values.


CFP for Roundtable on Data Management for Humanities Research at MLA 2013


MLA Call for Papers: Issues in Data Management for Humanities Research

In 2006 the American Council of Learned Societies released a report titled Our Cultural Commonwealth, summarizing the promises and challenges of “big data” within the humanities and social sciences. The radical growth of computing, networking, and digital storage promised (or at least prefaced) a new era of “cumulative, collaborative, and synergistic” scholarship. And as we’ve seen in the half-dozen years since the report was issued, much of this promise has been borne out. Examples include inter-institutional projects like those sponsored by the Digging into Data program (administered by the NEH’s Office of Digital Humanities); the Mellon-funded Project Bamboo (designed to become a content management and collaboration hub for IT and humanities researchers); and massive data collection undertakings like the Shoah Foundation’s Visual History Archive (a collection of nearly 52,000 testimonies from Holocaust and other genocide survivors).

Of course, most humanities research datasets don’t begin to approach this kind of scale. Single researchers and research teams working with local materials, locally created databases, and local storage are still very much the norm. The question that this roundtable talk focuses on, then, is: How do we define and support good humanities data practices at the individual and local level?

Presenters are encouraged to take a step back from “big” and ask how scholars, librarians, and technologists can help foster better local data collection, storage, and distribution in order to build research practices that promote multi-disciplinary and multi-institutional synergies from the ground up. By sharing local instances of data management, we hope to explore big data as a process of “building toward,” rather than a monumental or sui generis product.

Questions that might be addressed include:

What counts as humanities data? The term data is unsettling for many scholars in part because it connotes something definitive and unproblematic. Where humanities scholarship often thrives on complication and constructivism, data seeks repeatability and finality. Datasets are construed as a kind of incontestable bedrock, which, to some, makes them not only a little boring, but dangerously and deceptively so. Is there a way for humanities researchers to have our constructivist cake and eat it, too? Can we, in other words, productively question the constructedness of datasets even as we assemble them? And can we expand the kinds of information that constitute data?

Metadata and Occam’s Razor. When it comes to metadata, there are any number of fields to fill in, tags to apply, and descriptors to append, and not all of them are useful. Or rather, it’s difficult to know what metadata will be useful to current and future researchers and, for this reason, difficult to know when and where to stop. What are best practices for metadata? Is there a standard (Dublin Core, say) that ought to be adhered to? What are the benefits and drawbacks of standardization? Can crowdsourcing metadata help humanities researchers develop more comprehensive data markup to meet the needs of diverse users?

Copyright, Fair Use, and Open Access. Compiling data is one thing; being able to use it legally is another. Come discuss obstacles, strategies, and successes in dealing with copyright and use issues.

Grants and Funding. Have you successfully (or unsuccessfully) applied for grant funding that requires a data management / preservation strategy? We welcome conversations about how to articulate data management as a component of the grant application process. Funding is also an issue when it comes to supporting the programmers, designers, project managers, and copyright lawyers who may need to be part of a data management team. How do different institutions budget these costs? What experiences have you had seeking institutional in-kind support or funding for your own projects?

The roundtable will feature as many as eight presenters and is open to scholars, educators, and technologists from across the humanities. All presentation formats are welcome, but do let organizers know if you have specific technology needs.

Please send 250-word abstracts and a brief bio to kbjack@umich.edu or spencer.keralis@unt.net by Monday, March 19th.

 Keep in mind that all panelists will need to be registered MLA members (or have their membership waived) on or before April 7th.


‘International Open Access Day’ at CSIR-National Aerospace Laboratories (NAL), Bangalore, India


The ‘International Open Access Day’ was celebrated at the S R Valluri Auditorium at CSIR-NAL on October 24, 2011, as part of the ‘International Open Access Week’. This global event, now in its fourth year, is an opportunity for the academic and research community to continue to learn about the potential benefits of Open Access (OA), to share what they have learned with professional colleagues, and to inspire wider participation in making OA a new norm in scholarship and research. Dr. Poornima Narayana, Head, ICAST, welcomed the gathering and highlighted the significance of the open access movement and the initiatives being adopted around the world. A ‘road show’ of video clippings depicting the advantages of OA was screened, followed by a presentation on the current state of OA at the international and national levels. The OA mandate framed by the CSIR Core Committee, along with its guidelines and policies, was also highlighted.

The Chief Guest of the day was Prof. P. Balaram, Director, IISc, editor of the journal ‘Current Science’ and an advocate of open access, especially open archives. In his presentation, ‘Science Publishing: Issues of Access’, he expressed concern over science publishing, publishers’ growing monopoly, and, in particular, authors’ rights. He opined that open archives, one of the prominent OA channels, are preferable for promoting scholarly communication. In his own words: “Much publicly-funded research work done in India and other developing countries appears in high impact factor journals. The key question is, how should the fruits of publicly-funded research be made available to readers in the developing world at no cost? Since the question of who pays for open access journals is unresolved, scientists should go ahead and promote open archives.”

His concern regarding the funding of publishing in OA journals was clearly evident when he remarked, ‘OA – who will pay for the publishing – authors or readers?’ He noted that in Europe and the United States, the costs of publishing in open access journals are underwritten by grants from bodies such as the Wellcome Trust, the Howard Hughes Medical Institute, and the US National Institutes of Health, grants far larger than any seen by scientists in developing countries. He appreciated the efforts of librarians and information scientists in bridging the gap between publishers and information seekers. Prof. Balaram suggested that the various consortia, forums, and organizations in the country come together to negotiate with publishers, with clearly defined agreements on post-termination and perpetual access, and discuss these issues at the national policy-making level, backed by strong legislation, to arrive at a sustainable open access model.

Dr. L. Venkatakrishnan, Head of the Experimental Aerodynamics Division, CSIR-NAL, was the second speaker of the day, on the topic ‘Open Access: Promised Utopia or Eventual Reality?’. He gave a brief background on the peer-reviewed publication channel, a lengthy process taking around 18-21 months, and touched upon the various routes of OA: green and gold. He presented a clear picture of the escalating cost of commercial journals as the actual reason for the evolution of Open Access, citing the different OA journal models: completely free and author-pays (PLOS). In total agreement with Prof. Balaram’s call to promote OA through open archives (institutional repositories), he posed a straightforward question: “With faculty creating, editing, and reviewing content, are publishers required?”

Citing the famous “chicken or egg” phrase, he pondered whether more downloads lead to more citations or more citations lead to more downloads. Regarding the funding question, especially in developing countries, which Prof. Balaram had already raised, Dr. Venkatakrishnan described the general expectation of OA as wider access leading to more downloads, more readers, more citations, and finally more funding. He also provided some insight into the copyright policies of different publishers and statistics on author citations and downloads in the OA context.

Mr. Shyam Chetty, Director, CSIR-NAL, in his remarks highlighted the advancement of OA and the adoption of OA mandates and policies at CSIR. He said that CSIR will lead the OA movement within the country and bring other scientific bodies on board to form a ‘National Open Access Policy’, including legislation if necessary, to mandate that the output of publicly funded research be made publicly available in the near future. He appreciated CSIR-NAL’s role in actively advocating and promoting OA initiatives, and congratulated the Head, ICAST, and the team representing CSIR-NAL on being awarded, along with CSIR-NIO, the ‘Platinum Award’ for pioneering contributions to the CSIR OA movement. He further mentioned that CSIR-NAL’s Institutional Repository (IR) ranks among the world’s leading open repositories, and that CSIR-NAL has been identified as the nodal point for guiding the setting up of IRs, not only for other CSIR labs but for other institutions in the country.

Mr. B S Shivaram anchored the event, and Mr. S R Dey proposed the vote of thanks.

Poornima Narayana
