FLOSS Project Planets

Acquia Developer Center Blog: Learn Drupal 8 for Free: Extending and Managing a Drupal 8 Site

Planet Drupal - Fri, 2016-12-02 13:57

If you, or a colleague, are wondering why "community" is an important part of the Drupal experience, this course could be a good way to introduce the topic.

That's because community is an underlying theme that runs through many of the 20+ segments that make up the course.

The section on Modules is a good example. Our instructor, Rod, explains the concept of "contributed modules," and illustrates how they work in practice using three specific modules: Book, Forums, and Telephone.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Drupal core announcements: Drupal 8 and 7 core release window on Wednesday, December 07, 2016

Planet Drupal - Fri, 2016-12-02 13:15
Start: 2016-12-06 12:00 - 2016-12-08 12:00 UTC
Organizers: stefan.r, David_Rothstein, Fabianx, catch, xjm, cilefen
Event type: Online meeting (e.g. IRC meeting)

The monthly core patch (bug fix) release window is this Wednesday, December 07. Drupal 8.2.4 and 7.53 will be released with fixes for Drupal 8 and 7.

To ensure a reliable release window for the patch release, there will be a Drupal 8.2.x commit freeze from 12:00 UTC Tuesday to 12:00 UTC Thursday. Now is a good time to update your development/staging servers to the latest 8.2.x-dev or 7.x-dev code and help us catch any regressions in advance. If you do find any regressions, please report them in the issue queue. Thanks!

To see all of the latest changes that will be included in the Drupal 8 release, see the 8.2.x commit log. The Drupal 7 release will only contain one fix for a drag-and-drop regression introduced in Drupal 7.51 (see also the 7.x commit log).

Other upcoming core release windows after this week include:

  • Wednesday, December 21 (security release window)
  • Wednesday, January 04 (patch release window)
  • Wednesday, April 5 (scheduled minor release)

Drupal 6 is end-of-life and will not receive further releases.

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.

Categories: FLOSS Project Planets

Seventeen new GNU releases in November

FSF Blogs - Fri, 2016-12-02 12:38

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

This month, we welcome Mathieu Lirzin as the new maintainer of GNU Freetalk.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at maintainers@gnu.org with any GNUish questions or suggestions for future installments.

Categories: FLOSS Project Planets

Shirish Agarwal: Air Congestion and Politics

Planet Debian - Fri, 2016-12-02 10:20

Confession time first – I am not a frequent flyer at all. My first flight was in 2006. It was a two-hour flight from Bombay (BOM) to Bengaluru (formerly Bangalore, BLR). I still remember the trepidation, the nervousness and excitement the first time I took to the air. I still remember the flight very vividly.

It was a typical humid day for Bombay/Mumbai and we (a friend and I) had gone to Sahar (the domestic airport) to take the flight in the evening. Before we started, the sky had turned golden-orange and I was wondering how I would feel once I was in the air. We started at around 20:00 hours and, as it was a clear night, we were able to see the Queen's Necklace (Marine Drive) in all her glory.

The photographs on the Wikipedia page don't really do justice to how beautiful the whole boulevard looks at night, especially how it looks from up there. While we were looking, it seemed the pilot had banked at a 45-degree angle so we could have the best view of the necklace, OR maybe the pilot wanted to take a photo, OR it was just me being in overdrive (like Robin Williams' Russian immigrant in Moscow on the Hudson experiences the first time he goes to the mall ;))

Either way, this would be an experience I would never forget for the rest of my life. I remember I didn't move an inch (even to go to the loo) as I didn't want to let go of the whole experience. While I came back after 3-4 days, I still remember re-experiencing/re-imagining the flights for a whole month each time I went to sleep.

While I can't say it has become routinised, I have been lucky to have the opportunity to fly domestically around the country, primarily for work. After the initial romanticism wears off, you try to understand the various aspects of the flight happening around you.

These experiences are what led me to write and share today's blog post. Yesterday, Ms. Mamata Banerjee, one of the leaders of the Opposition, cried wolf because her aircraft was circling the airport. Because she is the Chief Minister, she feels she should have got precedence, or at least that seems to be the way the story unfolded on TV.

I have flown about 15-20 times in the last decade for work or leisure. On almost all the flights I have been on, it has been routine for the aircraft to circle the airport for 15-20 minutes before landing. This is 'routine'. I have seen aircraft being stacked (remember the scene from Die Hard 2 where Holly McClane, John McClane's wife, looks at different aircraft at different altitudes from her window seat); this is what an airport has to do when it doesn't have enough runways. In fact, I just read a few days back that MIAL is going for an emergency expansion, as they weren't expecting as many passengers as they got this year as well as last. In fact, the same day there was a near-miss between two aircraft at Mumbai airport itself. Because of Ms. Mamata's belligerence, this story didn't even get a mention in the mainstream TV media.

The point I wanna underscore is that this is a fact of life, and not just in India; world over, it seems hubs are busier than ever. For instance, Heathrow has also been a busy bee, and they will have to rework air operations as per a recent article.

In India, Kolkata is also one of the busier airports. If anything, I hope this teaches her the issues that plague most Indian airports, and that she works with the Government at the Centre so the airport can expand more. They got a new terminal just three years back.

It is for these issues that the Indian Government has come up with the 'Regional Connectivity Scheme'.

Lastly, a bit of welcome news for people thinking of visiting India: the Govt. of the day is facilitating easier visa norms to increase tourism and trade with India. Hope this is beneficial to any and all Debian Developers who wanna come visit India.

Categories: FLOSS Project Planets

Amazee Labs: Drupal Mountain Camp is coming

Planet Drupal - Fri, 2016-12-02 09:58
Drupal Mountain Camp is coming

Together with the local Drupal Community, we are inviting you to join us for Drupal Mountain Camp in Davos, Switzerland. More than 200 attendees are expected to come for sessions, workshops, sprints and stay for the community ... as well as a great amount of powder for those interested in skiing or snowboarding under perfect conditions!

Josef Dabernig Fri, 12/02/2016 - 15:58

After a very successful and very interesting Drupal Commerce Camp in 2011, the team of Drupal Events Schweiz decided that it was again time for a Drupal Camp in Switzerland. As Switzerland offers so much more than bright attendees and speakers, we also want to show the beauty of our country and mountains. We found the perfect location for this: Davos!

The camp will happen from 16 to 19 February 2017 at the Davos Congress Centre. We expect around 200 attendees from Switzerland, all over Europe and the world. We will feature a day of summits, two days of sessions, a day fully dedicated to sprints, and social activities each day.

I'm especially excited that Preston So has been confirmed as the first keynote speaker. He will be giving a talk on "API-first Drupal and the future of the CMS". In addition, we have confirmed a number of speakers from Switzerland and internationally. Interested in presenting? The call for sessions is open until the beginning of January.

Sprints are a great place to get involved with the development of Drupal 8, join an initiative, and work with experts and people interested in the same areas. See the sprint sheet to sign up now and join forces to improve Media, Paragraphs, and Drupal 8 core, as well as the Rules module for Drupal 8.

We are already thankful for a great number of sponsors, who help keep ticket prices low. If you are interested in finding Drupal talent or providing your services to Swiss customers, this is a unique opportunity. See the Drupal Mountain Camp website for information about sponsoring or contact Michael directly.

Discounted hotel options are available from CHF 59 per person/night via the following link: http://www.davoscongress.ch/DrupalMountainCamp

Early Bird Tickets are available until the end of December for only CHF 80. With your purchase, you also get a discount on travel with the famous Swiss railway service. There is more to come!

See you 16-19 February in Davos, Switzerland. In the meantime, follow us on Twitter.

Categories: FLOSS Project Planets

Snapping KDE Applications

Planet KDE - Fri, 2016-12-02 09:44

This is largely based on a presentation I gave a couple of weeks ago. If you are too lazy to read, go watch it instead.

For 20 years KDE has been building free software for the world. As part of this endeavor, we created a collection of libraries to assist in high-quality C++ software development as well as building highly integrated graphic applications on any operating system. We call them the KDE Frameworks.

With the recent advance of software bundling systems such as Snapcraft and Flatpak, KDE software maintainers are, however, a bit on the spot. As our software builds on such a vast collection of frameworks and supporting technology, the individual size of a distributable application can be quite enormous.

When we tried to package our calculator KCalc as a snap bundle, we found that even a relatively simple application like this makes for a good 70 MiB snap in a working state (most of this is the graphical stack required by our underlying C++ framework, Qt).
Since then, a lot of effort has been put into devising a system that allows us to deal with this more efficiently. We now have a reasonably suitable solution on the table.

The KDE Frameworks 5 content snap.

A content snap is a special bundle meant to be mounted into other bundles for the purpose of sharing its content. This allows us to share a common core of libraries and other content across all applications, making the individual applications just as big as they need to be. KCalc is only 312 KiB without translations.

The best thing is that besides some boilerplate definitions, the snapcraft.yaml file defining how to snap the application is like a regular snapcraft file.

Let’s look at how this works by example of KAlgebra, a calculator and mathematical function plotter:

Any snapcraft.yaml has some global attributes we’ll want to set for the snap

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

We’ll want to define an application as well. This essentially allows snapd to expose and invoke our application properly. For the purpose of content sharing we will use a special start wrapper called kf5-launch that allows us to use the content shared Qt and KDE Frameworks. Except for the actual application/binary name this is fairly boilerplate stuff you can use for pretty much all KDE applications.

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

To access the KDE Frameworks 5 content share we’ll then want to define a plug our application can use to access the content. This is always the same for all applications.

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

Once we've got all that out of the way, we can move on to actually defining the parts that make up our snap. For the most part, parts are build instructions for the application and its dependencies. With content shares, there are two boilerplate parts you want to define.

The development tarball is essentially a fully built KDE Frameworks tree including development headers and CMake configs. The tarball is packed by the same tech that builds the actual content share, so this allows you to build against the correct versions of the latest share.

kde-frameworks-5-dev:
  plugin: dump
  snap: [-*]
  source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

The environment rigging provides the kf5-launch script we previously saw in the application's definition; we'll use it to execute the application within a suitable environment. It also gives us the directory for the content share mount point.

kde-frameworks-5-env:
  plugin: dump
  snap: [kf5-launch, kf5]
  source: http://github.com/apachelogger/kf5-snap-env.git

Lastly, we'll need the actual application part, which simply states that the dev part needs to be staged first, and then builds the tarball with boilerplate CMake config flags.

kalgebra:
  after: [kde-frameworks-5-dev]
  plugin: cmake
  source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
  configflags:
    - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
    - "-DCMAKE_INSTALL_PREFIX=/usr"
    - "-DCMAKE_BUILD_TYPE=Release"
    - "-DENABLE_TESTING=OFF"
    - "-DBUILD_TESTING=OFF"
    - "-DKDE_SKIP_TEST_SETTINGS=ON"

Putting it all together, we get a fairly standard snapcraft.yaml with some additional boilerplate definitions to wire it up with the content share. Please note that the content share is using KDE neon's Qt and KDE Frameworks builds, so if you want to try this and need additional build-packages or stage-packages to build a part, you'll want to make sure that KDE neon's User Edition archive is present in the build environment's sources.list (deb http://archive.neon.kde.org/user xenial main). This is going to get a more accessible, centralized solution for all of KDE soon™.

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel
apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications
plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5
parts:
  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz
  kde-frameworks-5-env:
    plugin: dump
    snap: [kf5-launch, kf5]
    source: http://github.com/apachelogger/kf5-snap-env.git
  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
    configflags:
      - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
      - "-DCMAKE_INSTALL_PREFIX=/usr"
      - "-DCMAKE_BUILD_TYPE=Release"
      - "-DENABLE_TESTING=OFF"
      - "-DBUILD_TESTING=OFF"
      - "-DKDE_SKIP_TEST_SETTINGS=ON"

Now, to install this, we'll need the content snap itself. Here is the content snap. To install it, a command like sudo snap install --force-dangerous kde-frameworks-5_*_amd64.snap should get you going. Once that is done, one can install the kalgebra snap. If you are a KDE developer and want to publish your snap on the store, get in touch with me so we can get you set up.

The kde-frameworks-5 content snap is also available in the edge channel of the Ubuntu store. You can try the games kblocks and ktuberling like so:

sudo snap install --edge kde-frameworks-5
sudo snap install --edge --devmode kblocks
sudo snap install --edge --devmode ktuberling

If you want to be part of making the world a better place, or would like a KDE-themed postcard, please consider donating a penny or two to KDE.

Categories: FLOSS Project Planets

Ixis.co.uk - Thoughts: The wonders of Twig theming

Planet Drupal - Fri, 2016-12-02 07:47

Part of the plan for rebuilding the Ixis site in Drupal 8 was for us to write up some of our thoughts at the end. Writing about the theme layer is hard, because it's all completely new. There are a million things I want to talk about, some of them Drupal related, some of them just about frontend tools in general, so I'm going to try and squeeze as much as I can in here.

Bootstrap is awesome

The frontend theme is built on Bootstrap, which allowed us to get the site up and running quickly, and iterate on feedback from the rest of Ixis and the design agency. We only started building the theme 6 days before the site went live!

Using a fairly heavy framework like Bootstrap often raises concerns about performance, but considering that all of the CSS on one of our page loads is ~30 KB, it was worth the trade-off for the speed of development and iteration. At some point we'll go through Bootstrap and remove the parts we aren't using, but right now we're still iterating and improving things.

Libsass is awesome

We’ve been using Sass for a while at Ixis, it’s amazing for writing clear CSS that can actually be maintained after a few years of iterative development work. Up until now we’ve relied on Compass to compile that for us, but this time we took a look at Gulp and libsass with node-sass.

Damn is it fast. We're compiling bootstrap-sass as part of our theme, which used to take Compass ages every time we changed a variable. Libsass builds the whole thing in about a second. On top of compiling the CSS, we're using Gulp on the Ixis site to automatically add vendor prefixes (no more reliance on Compass for browser compatibility), provide image mappings (which let Sass access information about the images, like dimensions) and optimise the file size of those images.

Twig is awesome

I <3 Twig. After so many years of wrangling the Drupal PHPTemplate engine into usable markup, it is so refreshing that everything is templated. No more overriding theme functions just to add an extra class to a div. You don’t even need to use PHP at all to do it.

Dealing with render arrays in a template? Just print them! Doesn’t matter what’s in them. Let Twig sort it out. You’ll never again see “Array” printed to the screen because you forgot to pass something through render().

I know a huge amount of effort went into making Drupal 8 more accessible to frontend folks, and it really does seem to have paid off! The only downside is that I still have to go back to the Drupal 7 way of PHP everywhere occasionally to support older sites.

Libraries are awesome

The new libraries.yml file makes it a lot easier to define libraries, which are collections of JavaScript and CSS, along with their dependencies, so you can just load things when you need them. No gallery on this page? Drupal won't load that JavaScript, and if the gallery was the only reason you needed jQuery, then it won't load that either. A contrib module that adds a library can now be boiled down to just two YAML files: an info.yml and a libraries.yml.

Contrib for libraries is in a bit of a weird state in Drupal 8 at the moment. If you've used Drupal 7 then you've probably used the Libraries API module; it's there to allow other contrib modules to share third-party libraries. It looks like the plan for Drupal 8 is to eventually have a centralised repository of third-party libraries, but currently it doesn't seem like a lot of contrib is using it, instead just relying on the library being in /libraries in the Drupal root.

Paragraphs are awesome

We went with paragraphs in order to allow content editors a bit of control over the layout of the pages. I won't waffle too much about how we set up paragraphs because we've already talked about that, but from a frontend point of view, each paragraph type has its own Twig template, and we can load separate libraries just for that one paragraph, so we were able to make each paragraph into its own self-contained component. Did I mention I love the new Twig stuff in Drupal 8?

Caching is awesome, but you should probably learn how it works

The new caching layer is amazing; it just seems to work magically behind the scenes. It can be quite easy to be caught out by it, though, if you don't understand what's happening behind the curtain, especially if you're used to Drupal 7's way of caching each page.

Here’s an example from building the Ixis site: The logo on our site links to the front page. It’s a fairly common thing to do. If you’re already on the frontpage though, that’s a redundant link, there’s no reason for it to be there and it can confuse things for those using screen readers.

So we added a simple check: if we're on the front page, just show the logo; otherwise, wrap the logo in a link to the front page. Without caching, this works fine. With caching, Drupal caches that block the first time it's rendered, then uses it everywhere, because we haven't told Drupal that this block can differ based on the path.

In the end, we added a new cache context to the 'site branding' block, so Drupal knows it can differ based on the URL. We're currently relying on just the 'url.path' context, but in 8.3 there's a new url.path.is_front context we'll be using.

Debugging is easy

Debugging Twig is easy peasy: in your sites/default/services.yml file (copy the one from default.services.yml if it doesn't exist), change the debug value to 'true'.

parameters:
  twig.config:
    debug: true

Then you get handy comments like this in the page source, telling you which theme hook and template file produced each chunk of markup:

<!-- THEME DEBUG -->
<!-- THEME HOOK: '...' -->
<!-- FILE NAME SUGGESTIONS: ... -->
<!-- BEGIN OUTPUT from '...' -->

You can quickly dump a variable with the dump function, like {{ dump(a_variable) }}, which just uses PHP's var_dump() behind the scenes, but if you want to poke at arrays you'll probably want to use the kint module from devel, which gives you a much nicer output with {{ kint(content) }}. Word of warning: the little + will expand everything, and if it's a big tree it'll just crash your browser.

Frontend developer experience in Drupal 8 is a huge improvement over what was in Drupal 7, and thanks to the new release cycle, it’s continuing to improve even after Drupal 8 has launched. Really looking forward to seeing what new features we’ll get in the future, and I’ll be keeping an eye on the ‘core ideas’ issue queue.

Categories: FLOSS Project Planets

Unimity Solutions Drupal Blog: Global Opportunities with Drupal - Enterprise Adoption

Planet Drupal - Fri, 2016-12-02 07:26

This is again an excerpt from my talk at #DCD2016. The second part of my talk was on Drupal Enterprise Adoption.

Categories: FLOSS Project Planets

Andrew Dalke: Weininger's Realization

Planet Python - Fri, 2016-12-02 07:00

Dave Weininger passed away recently. He was very well known in the chemical informatics community because of his contribution to the field and his personality. Dave and Yosi Taitz founded Daylight Chemical Information Systems to turn some of these ideas into a business, back in the 1980s. It was very profitable. (As a bit of trivia, the "Day" in "Daylight" comes from "Dave And Yosi".)

Some of the key ideas that Dave and Daylight introduced are SMILES, SMARTS, and fingerprints (both the name and the hash-based approach). Together these made for a new way to handle chemical information search, and do so in significantly less memory. The key realization, which I think led to the business success of the company, is that the cost of memory was decreasing faster than the creation of chemical information. This trend, combined with the memory savings of SMILES and fingerprints, made it possible to store a corporate dataset in RAM, and do chemical searches about 10,000 times faster than the previous generation of hard-disk based tools, and do it before any competition could. I call this "Weininger's Realization". As a result, the Daylight Thor and Merlin databases, along with the chemistry toolkits, became part of the core infrastructure of many pharmaceutical companies.

I don't know if there was a specific "a-ha" moment when that realization occurred. It certainly wasn't what drove Dave to work on those ideas in the first place. He was a revolutionary, a Prometheus who wanted to take chemical information from what he derisively called 'the high priests' and bring it to the people.

An interest of mine in the last few years is to understand more about the history of chemical information. The best way I know to describe the impact of Dave and Daylight is to take some of the concepts back to the roots.

You may also be interested in reading Anthony Nicholls' description of some of the ways that Dave influenced him, and Derek Lowe's appreciation of SMILES.

Errors and Omissions

Before I get there, I want to emphasize that the success of Daylight cannot be attributed to just Dave, or Dave and Yosi. Dave's brother Art and his father Joseph were coauthors on the SMILES canonicalization paper. The company hired people to help with the development, both as employees and consultants. I don't know the details of who did what, so I will say "Dave and Daylight" and hopefully reduce the all too easy tendency to give all the credit to the most visible and charismatic person.

I'm unfortunately going to omit many parts of the Daylight technologies, like SMIRKS, where I don't know enough about the topic or its effect on cheminformatics. I'll also omit other important but invisible aspects of Daylight, like documentation or the work Craig James did to make the database servers more robust to system failures. Unfortunately, it's the jockeys and horses which attract the limelight, not those who muck the stables or shoe the horses.

Also, I wrote this essay mostly from what I have in my head and from presentations I've given, which means I've almost certainly made mistakes that could be fixed by going to my notes and primary sources. Over time I hope to spot and fix those mistakes in this essay. Please let me know of anything you want me to change or improve.

Dyson and Wiswesser notations

SMILES is a "line notation", that is, a molecular representation which can be described as a line of text. Many people reading this may have only a vague idea of the history of line notations. Without that history, it's hard to understand what helped make SMILES successful.

The original line notations were developed in the 1800s. By the late 1800s chemists began to systematize the language into what is now called the IUPAC nomenclature. For example, caffeine is "1,3,7-trimethylpurine-2,6-dione". The basics of this system are taught in high school chemistry class. It takes years of specialized training to learn how to generate the correct name for complex structures.

Chemical nomenclature helps chemists index the world's information about chemical structures. In short, if you can assign a unique name to a chemical structure (a "canonical" name), then you can use standard library science techniques to find information about the structure.

The IUPAC nomenclature was developed when books and index cards were the best way to organize data. Punched card machines brought a new way of thinking about line notations. In 1946, G. Malcolm Dyson proposed a new line notation meant for punched cards. The Dyson notation was developed as a way to mechanize the process of organizing and publishing a chemical structure index. It became a formal IUPAC notation in 1960, but was already on its last legs and dead within a few years. While it might have been useful for mechanical punched card machines, it wasn't easily repurposed for the computer needs of the 1960s. For one, it depended on superscripts and subscripts, and used characters which didn't exist on the IBM punched cards.

William J. Wiswesser in 1949 proposed the Wiswesser Line Notation, universally called WLN, which could be represented in EBCDIC and (later) ASCII in a single line of text. More importantly, unlike the Dyson notation, which follows the IUPAC nomenclature tradition of starting with the longest carbon chain, WLN focuses on functional groups, and encodes many functional groups directly as symbols.

Chemists tend to be more interested in functional groups, and want to search based on those groups. For many types of searches, WLN acts as its own screen, that is, it's possible to do some types of substructure search directly on the symbols of the WLN, without having to convert the name into a molecular structure for a full substructure search. To search for structures containing a single sulfur, look for WLNs with a single occurrence of S, but not VS or US or SU. The chemical information scientists of the 1960s and 1970s developed several hundred such clever pattern searches to make effective use of the relatively limited hardware of that era.

WLNs started to disappear in the early 1980s, before SMILES came on the scene. Wendy Warr summarized the advantages and disadvantages of WLNs in 1982. She wrote "The principle disadvantage of WLN is that it is not user friendly. This can only be overcome by programs which will derive a canonical WLN from something else (but no one has yet produced a cost-effective program to do this for over 90% of compounds), by writing programs to generate canonical connection tables from noncanonical WLNs, or by accepting the intervention of a skilled "middle man"."

Dyson/IUPAC and WLNs were just two of dozens, if not hundreds, of proposed line notations. Nearly every proposal suffered from a fatal flaw - they could not easily be automated on a computer. Most required postgraduate-level knowledge of chemistry, and were error-prone. The more rigorous proposals evaluated the number of mistakes made during data entry.

One of the few exceptions are the "parentheses free" notations from a pair of papers from 1964, one by Hiz and the other by Eisman, in the same issue of the Journal of Chemical Documentation. To modern eyes, they look very much like SMILES, but represented in a postfix notation. Indeed, the Eisman paper gives a very SMILES-like notation for a tree structure, "H(N(C(HC(CIC(N(HH)C(HN(IH)))I)H)H))", and a less SMILES-like notation for a cyclic structure, before describing how to convert them into a postfix form.

I consider the parentheses-free nomenclatures a precursor to SMILES, but they were not influential to the larger field of chemical information. I find this a bit odd, and part of my research has been to try and figure out why. It's not like it had no influence. A version of this notation was in the Chemical Information and Data System (CIDS) project in the 1960s and early 1970s. In 1965, "CIDS was the first system to demonstrate online [that is, not batch processing] searching of chemical structures", and CIDS wasn't completely unknown in the chemical information field.

But most of the field in the 1970s went for WLN for a line notation, or a canonical connection table.

SMILES

Dave did not know about the parentheses free line notations when he started work on SMILES, but he drew from similar roots in linguistics. Dave was influenced by Chomsky's writings on linguistics. Hiz, mentioned earlier, was at the Department of Linguistics at the University of Pennsylvania, and that's also where Eugene Garfield did his PhD work on the linguistics of chemical nomenclature.

Dave's interest in chemical representations started when he was a kid. His father, Joseph Weininger, was a chemist at G.E., with several patents to his name. He would draw pictures of chemical compounds for Dave, and Dave would, at a non-chemical level, describe how they were put together. These seeds grew into what became SMILES.

SMILES as we know it started when Dave was working for the EPA in Duluth. They needed to develop a database of environmental compounds, to be entered by untrained staff. (For the full story of this, read Ed Regis's book "The Info Mesa: Science, Business, and New Age Alchemy on the Santa Fe Plateau.") As I recall, SMILES was going to be the internal language, with a GUI for data entry, but it turned out that SMILES was easy enough for untrained data entry people to write directly.

And it's simple. I've taught the basics of SMILES to non-chemist programmers in a matter of minutes, while WLN, Dyson, and InChI, as examples of other line notations, are much more difficult to generate either by hand or by machine. Granted, those three notations have canonicalization rules built into them, which is part of the difficulty. Still, I asked Dave why something like SMILES didn't appear earlier, given that the underlying concepts existed in the literature by then.

He said he believes it's because the generation of people before him didn't grow up with the software development background. I think he's right. When I go to a non-chemist programmer, I say "it's a spanning tree with special rules to connect the cycles", and they understand. But that vocabulary was still new in the 1960s, and very specialized.

There's also some conservatism in how people work. Dyson defended the Dyson/IUPAC notation, saying that it was better than WLN because it was based on the longest carbon chain principle that chemists were already familiar with, even though the underlying reasons for that choice were becoming less important because of computer search. People know what works, and it's hard to change that mindset when new techniques become available.

Why the name SMILES? Dave Cosgrove told me it was because when Dave would tell people he had a new line notation, most would "groan, or worse. When he demonstrated it to them, that would rapidly turn to a broad smile as they realised how simple and powerful it is."

Exchange vs. Canonical SMILES

I not infrequently come across people who say that SMILES is a proprietary format. I disagree. I think the reason for the disagreement is that two different concepts go under the name "SMILES". SMILES is an exchange language for chemistry, and it's an identifier for chemical database search. Only the second is proprietary.

Dave wanted SMILES as a way for chemists from around the world and through time to communicate. SMILES describes a certain molecular valence model view of the chemistry. This does not need to be canonical, because you can always do that yourself once you have the information. I can specify hydrogen cyanide as "C#N", "N#C", or "[H][C]#[N]" and you will be able to know what I am talking about, without needing to consult some large IUPAC standard.
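
As a small illustration of the exchange idea, here is a sketch using the open-source RDKit toolkit (my choice for the example; it is not Daylight's software, and RDKit's canonical atom ordering differs from Daylight's). The two hydrogen cyanide spellings above parse to the same molecular graph and round-trip to a single canonical string:

from rdkit import Chem

# Two spellings of hydrogen cyanide from the paragraph above; the atom
# order differs, but both describe the same molecular graph.
for smi in ("C#N", "N#C"):
    mol = Chem.MolFromSmiles(smi)               # read the exchange form
    print(smi, "->", Chem.MolToSmiles(mol))     # MolToSmiles is canonical by default

Both lines print the same canonical string, which is the sense in which canonicalization is something "you can always do yourself once you have the information".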

He wanted people to use SMILES that way, without restriction. The first SMILES paper describes the grammar. Later work at Daylight in the 1990s extended SMILES to handle isotopes and stereochemistry. (This was originally called "isomeric SMILES", but it's the SMILES that people think of when they want a SMILES.) Daylight published the updated grammar on their website. It was later included as part of Gasteiger's "Handbook of Chemoinformatics: From Data to Knowledge in 4 Volumes". Dave also helped people at other companies develop their own SMILES parsers.

To say that SMILES as an exchange format is proprietary is opposite to what Dave wanted and what Daylight did.

What is proprietary is the canonicalization algorithm. The second SMILES paper describes the CANGEN algorithm, although it is incomplete and doesn't actually work. Nor does it handle stereochemistry, which was added years later. Even internally at Daylight, it took many years to work out all of the bugs in the implementation.

There's a good financial reason to keep the algorithm proprietary. People were willing to pay a lot of money for a good, fast chemical database, and the canonicalization algorithm was a key part of Daylight's Thor and Merlin database servers. In business speak, this is part of Daylight's "secret sauce".

On the other hand, there's little reason why that algorithm should be published. Abstractly speaking, it would mean that different tools would generate the same canonical SMILES, so a federated data search would reduce to a text search, rather than require a re-canonicalization. This is one of the goals of the InChI project, but they discovered that Google didn't index the long InChI strings in a chemically useful way. They created the InChI key as a solution. SMILES has the same problem and would need a similar solution.

Noel O'Boyle published a paper pointing out that the InChI canonicalization assignment could be used to assign the atom ordering for a SMILES string. This would give a universal SMILES that anyone could implement. There's been very little uptake of that idea, which gives a feel of how little demand there is.

Sometimes people also bring in the governance model to decide if something is proprietary or not, or point to the copyright restrictions on the specification. I don't agree with these interpretations, and would gladly talk about them at a conference meeting if you're interested.

Line notations and connection tables

There are decades of debate on the advantages of line notations over connection tables, or vice versa. In short, connection tables are easy to understand and parse into an internal molecule data structure, while line notations are usually more compact and can be printed on a single line. And in either case, at some point you need to turn the text representation into a data structure and treat it as a chemical compound rather than a string.

Line notations are a sort of intellectual challenge. This alone seems to justify some of the many papers proposing a new line notation. By comparison, Open Babel alone supports over 160 connection table formats, and there are untold more in-house or internal formats. Very few of these formats have ever been published, except perhaps in an appendix in a manual.

Programmers like simple formats because they are easy to parse, often easy to parse quickly, and easy to maintain. Going back to the Warr quote earlier, it's hard to parse WLN efficiently.

On the other hand, line notations fit better with text-oriented systems. Back in the 1960s and 1970s, ISI (the Institute for Scientific Information) indexed a large subset of the chemistry literature and distributed the WLNs as paper publications, in a permuted table to help chemists search the publication by hand. ISI was a big proponent of WLN. And of course it was easy to put a WLN on a punched card and search it mechanically, without an expensive computer.

Even now, a lot of people use Excel or Spotfire to display their tabular data. It's very convenient to store the SMILES as a "word" in a text cell.

Line notations also tend to be smaller than connection tables. As an example, the connection table lines from the PubChem SD files (excluding the tag data) average about 4K per record. The PUBCHEM_OPENEYE_ISO_SMILES tag values average about 60 bytes in length.

Don't take the factor of 70 as being all that meaningful. The molfile format is not particularly compact, PubChem includes a bunch of "0" entries which could be omitted, and the molfile stores the coordinates, which the SMILES does not. The CAS search system in the late 1980s used about 256 bytes for each compact connection table, which is still 4x larger than the equivalent SMILES.

Dave is right. SMILES, unlike most earlier line notations, really is built with computer parsing in mind. Its context-free grammar is easy to parse using a simple stack, though still not as easy as a connection table. It doesn't require much in the way of lookup tables or state information. There's also a pretty natural mapping from the SMILES to the corresponding topology.
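
To give a feel for how little machinery the grammar needs, here is a toy parser for a small SMILES subset: single-letter organic-subset atoms plus Cl and Br, the bonds -, = and #, parenthesised branches, and single-digit ring closures (no aromatic lowercase atoms, charges, isotopes, or stereochemistry). This is my own sketch, not Daylight's code; the branch stack and the ring-closure bookkeeping are essentially the whole trick.

def parse_smiles(smiles):
    """Parse a small SMILES subset into an atom list and a bond list.

    A toy sketch, not a full SMILES reader: it handles single-letter organic
    atoms plus Cl/Br, the bonds - = #, branches, and single-digit ring closures.
    """
    atoms = []           # element symbol per atom index
    bonds = []           # (atom_i, atom_j, bond_order)
    branch_stack = []    # saved attachment points for '(' ... ')'
    ring_open = {}       # ring-closure digit -> (atom index, bond order)
    prev = None          # index of the most recent atom
    order = 1            # bond order to apply to the next connection
    i = 0
    while i < len(smiles):
        ch = smiles[i]
        if ch == '(':
            branch_stack.append(prev)
        elif ch == ')':
            prev = branch_stack.pop()
        elif ch in '-=#':
            order = '-=#'.index(ch) + 1
        elif ch.isdigit():
            if ch in ring_open:                  # second occurrence: close the ring
                other, open_order = ring_open.pop(ch)
                bonds.append((other, prev, max(order, open_order)))
            else:                                # first occurrence: remember the ring start
                ring_open[ch] = (prev, order)
            order = 1
        else:                                    # an atom symbol
            symbol = smiles[i:i + 2] if smiles[i:i + 2] in ('Cl', 'Br') else ch
            i += len(symbol) - 1
            atoms.append(symbol)
            if prev is not None:
                bonds.append((prev, len(atoms) - 1, order))
            prev = len(atoms) - 1
            order = 1
        i += 1
    return atoms, bonds

# Acetic acid: atoms ['C', 'C', 'O', 'O'], bonds [(0, 1, 1), (1, 2, 2), (1, 3, 1)]
print(parse_smiles("CC(=O)O"))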

What happens if you have a really fast SMILES parser? As a thought experiment which doesn't reflect real hardware, suppose you could convert 60 bytes of SMILES string to a molecule data structure faster than you could read the additional 400 bytes of connection table data. (Let's say the 10 GHz CPU is connected to the data through a 2400 baud modem.) Then clearly it's best to use a SMILES, even if it takes longer to process.

A goal for the Daylight toolkit was to make SMILES parsing so fast that there was no reason to store structures in a special internal binary format or data structure. Instead, when it's needed, parse the SMILES into a molecule object, use the molecule, and throw it away.

On the topic of muck and horseshoes, as I recall Daylight hired an outside company at one point to go through the code and optimize it for performance.

SMARTS

SMARTS is a natural recasting and extension of the SMILES grammar to define a related grammar for substructure search.

I started in chemical information in the late 1990s, with the Daylight toolkit and a background which included university courses on computer grammars like regular expressions. The analogy that SMARTS is to SMILES as regular expressions are to strings seemed obvious, and I modeled my PyDaylight API on the equivalent Python regular expression API.

Only years later did I start to get interested in the history of chemical information, though I've only gotten up to the early 1970s so there's a big gap that I'm missing. Clearly there were molecular query representations before SMARTS. What I haven't found is a query line notation, much less one implemented in multiple systems. This is a topic I need to research more.

The term "fingerprint"

Fingerprints are a core part of modern cheminformatics. I was therefore surprised to discover that Daylight introduced the term "fingerprint" to the field, around 1990 or so.

The concept existed before then. Adamson and Bush did some of the initial work in using fingerprint similarity as a proxy for molecular similarity in 1973, and Willett and Winterman's 1986 papers [1, 2] (the latter also with Bawden) reevaluated the earlier work and informed the world of the effectiveness of the Tanimoto. (We call it "Tanimoto" instead of "Jaccard" precisely because of those Sheffield papers.)
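
For reference, and in my own words rather than theirs: on bit fingerprints the Tanimoto (Jaccard) coefficient is the number of bits set in both fingerprints divided by the number of bits set in either. A tiny sketch, with fingerprints represented as sets of on-bit positions:

def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity of two fingerprints given as sets of on-bit positions."""
    if not fp_a and not fp_b:
        return 1.0                     # convention for two empty fingerprints
    return len(fp_a & fp_b) / len(fp_a | fp_b)

print(tanimoto({1, 5, 9, 12}, {1, 5, 9, 40}))   # 3 shared bits out of 5 distinct -> 0.6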

But up until the early 1990s, published papers referred to fingerprints as the "screening set" or "bit screens", which described the source of the fingerprint data, and didn't reify them into an independent concept. The very first papers which used "fingerprint" were by Yvonne Martin, an early Daylight user at Abbott, and John Barnard, who used "fingerprint", in quotes, in reference specifically to Daylight technology.

I spent a while trying to figure out the etymology of the term. I asked Dave about it, but it isn't the sort of topic which interests him, and he didn't remember. "Fingerprint" already existed in chemistry, for IR spectra, and the methods for matching spectra are somewhat similar to those of cheminformatics fingerprint similarity, but not enough for me to be happy with the connection. Early in his career Dave wrote software for a mass spectrometer, so I'm also not rejecting the possibility.

The term "fingerprint" was also used in cryptographic hash functions, like "Fingerprinting by Random Polynomials" by Rabin (1981). However, these fingerprints can only be used to test if two fingerprints are identical. They are specifically designed to make it hard to use the fingerprints to test for similarity of the source data.

I've also found many papers talking about image fingerprints or audio fingerprints which can be used for both identity and similarity testing; so-called "perceptual hashes". However, their use of "fingerprint" seems to have started a good decade after Daylight popularized it in cheminformatics.

Hash fingerprints

Daylight needed a new name because they used a new approach to screening.

Fingerprint-like molecular descriptions go back to at least the Frear code of the 1940s. Nearly all of the work in the intervening 45 years was focused on finding fragments, or fragment patterns, which would improve substructure screens.

Screen selection was driven almost entirely by economics. Screens are cheap, with data storage as the primary cost. Atom-by-atom matching, on the other hand, had a very expensive CPU cost. The more effective the screens, the better the screenout, the lower the overall cost for an exact substructure search.

The best screen would have around 50% selection/rejection on each bit, with no correlation between the bits. If that could exist, then an effective screen for 1 million structures would need only 20 bits (since 2^20 is about 1 million). This doesn't exist, because few fragments meet that criterion. The Sheffield group in the early 1970s (who often quoted Mooers as the source of the observation) looked instead at more generic fragment descriptions, rather than specific patterns. This approach was further refined by BASIC in Basel and then at CAS to become the CAS Online screens. This is likely the pinnacle of the 1970s screen development.

Even then, it had about 4000 patterns assigned to 2048 bits. (Multiple rare fragments can be assigned to the same bit with little reduction in selectivity.)

A problem with a fragment dictionary is that it can't take advantage of unknown fragments. Suppose your fragment dictionary is optimized for pharmaceutical compounds, then someone does a search for plutonium. If there isn't an existing fragment definition like "transuranic element" or "unusual atom", then the screen will not be able to reject any structures. Instead, it will slowly go through the entire data set only to return no matches.

This specific problem is well known, and the reason for the "OTHER" bit of the MACCS keys. However, other types of queries may still have an excessive number of false positives during screening.

Daylight's new approach was the enumeration-based hash fingerprint. Enumerate all subgraphs of a certain size and type (traditionally all linear paths with up to 7 atoms), choose a canonical order, and use the atom and bond types in order to generate a hash value. Use this value to seed a pseudo random number generator, then generate a few values to set bits in the fingerprint; the specific number depends on the size of the subgraph. (The details of the hash function and the number of bits to set were also part of Daylight's "secret sauce.")
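
To make that concrete, here is a toy Python version of the scheme as just described. It is only a sketch and emphatically not Daylight's implementation (their hash function and per-path bit counts were part of the "secret sauce"); it also fixes the number of bits per path at two, where the real scheme varied the count with the size of the subgraph:

import hashlib
import random

def path_fingerprint(atoms, bonds, n_bits=1024, max_len=7, bits_per_path=2):
    """Toy path-based hash fingerprint.

    atoms: element symbols indexed by atom number, e.g. ["C", "C", "O"]
    bonds: dict mapping both (i, j) and (j, i) to a bond symbol ("-", "=", "#")
    Returns the set of bit positions that are switched on.
    """
    neighbors = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        neighbors[i].append(j)

    def spell(path):
        # Write a path as alternating atom and bond symbols, e.g. "C-C=O".
        s = atoms[path[0]]
        for a, b in zip(path, path[1:]):
            s += bonds[(a, b)] + atoms[b]
        return s

    def walk(path):
        # Yield every linear (non-revisiting) path of up to max_len atoms.
        yield path
        if len(path) < max_len:
            for nxt in neighbors[path[-1]]:
                if nxt not in path:
                    yield from walk(path + [nxt])

    bits = set()
    for start in range(len(atoms)):
        for path in walk([start]):
            # "Canonical order": the same path is found from both ends, so
            # take the lexicographically smaller of its two spellings.
            canon = min(spell(path), spell(list(reversed(path))))
            seed = int.from_bytes(hashlib.sha1(canon.encode()).digest()[:8], "big")
            rng = random.Random(seed)          # seed a PRNG with the path hash
            for _ in range(bits_per_path):     # set a few bits per path
                bits.add(rng.randrange(n_bits))
    return bits

# Ethanol, C-C-O, hydrogens left implicit.
bonds = {}
for i, j in [(0, 1), (1, 2)]:
    bonds[(i, j)] = bonds[(j, i)] = "-"
print(sorted(path_fingerprint(["C", "C", "O"], bonds)))

Unlike a fragment dictionary, nothing here knows anything about chemistry: any path, including one through a plutonium atom, hashes to some set of bits.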

Information theory was not new in chemical information. Calvin Mooers developed superimposed coding back in the 1940s in part to improve the information density of chemical information on punched cards (and later complained about how computer scientists rediscovered it as hash tables). The Sheffield group also used information theory to guide their understanding of the screen selection problem. Feldman and Hodes in the 1970s developed screens by the full enumeration of common subgraphs of the target set and a variant of Mooers' superimposed coding.

But Daylight was able to combine information theory and computer science theory (i.e., hash tables) to develop a fingerprint generation technique which was completely new. And I do mean completely new.

Remember how I mentioned there are SMILES-like line notations in the literature, even if people never really used them? I've looked hard, and only with a large stretch of optimism can I find anything like the Daylight fingerprints before Daylight, and mostly as handwaving proposals for what might be possible. Nowadays, almost every fingerprint is based on a variation of the hash approach, rather than a fragment dictionary.

In addition, because of the higher information density, Daylight fingerprints were effective as both a substructure screen and a similarity fingerprint using only 1024 bits, instead of the 2048 bits of the CAS Online screen. This will be important in the next section.

Chemical data vs. compute power and storage size

CAS had 4 million chemical records in 1968. The only cost-effective way to store that data was on tape. Companies could and did order data tapes from CAS for use on their own corporate computers.

Software is designed for the hardware, so the early systems were built first for tape drives and then, as they became affordable, for the random-access capabilities of hard disks. A substructure search would first check against the screens to reject obvious mismatches, then for each of the remaining candidates, read the corresponding record off disk and do the full atom-by-atom match.

Apparently Derek Price came up with the "law of exponential increase" in "Science Since Babylon" (1961), which describes how science information has exponential growth. I've only heard about that second hand. Chemical data is no exception, and its exponential growth was noted, I believe, in the 1950s by Perry.

In their 1971 textbook, Lynch et al. observed that the doubling period was about 12 years. I've recalculated that number over a longer baseline, and it still holds. CAS had 4 million structures in 1968 and 100 million structures in 2015, which is a doubling every 10-11 years.
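
As a quick check of that arithmetic, using only the numbers in the paragraph above:

from math import log2

# Doubling period implied by CAS going from 4 million structures (1968)
# to 100 million structures (2015).
years = 2015 - 1968
doublings = log2(100_000_000 / 4_000_000)   # about 4.6 doublings
print(years / doublings)                     # about 10.1 years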

On the other hand, computers have gotten more powerful at a faster rate. For decades the effective computing power doubled every 2 years, and the amount of RAM and data storage for constant dollars has doubled even faster than that.

In retrospect it's clear that at some point it would be possible to store all of the world's chemistry data, or at least a corporate database, in memory.

Weininger's Realization

Disks are slow. Remember how the atom-by-atom search needed to pull a record off the disk? That means the computer needs to move the disk arm to the right spot and wait for the data to come by, while the CPU simply waits. If the data were in RAM then it would be 10,000x faster to fetch a randomly chosen record.

Putting it all in RAM sounds like the right solution, but in the early 1980s memory was something like $2,000 per MB while hard disk space was about $300 per MB. One million compounds with 256 bytes per connection table and 256 bytes per screen requires almost 500 MB of space. No wonder people kept the data on disk.

By 1990, RAM was about $80/MB while hard disk storage was $4/MB, while the amount of chemistry data had only doubled.

Dave, or at least someone at Daylight, must have realized that the two different exponential growth rates make for a game changer, and that the Daylight approach would give them a head start over the established vendors. This is explicit in the Daylight documentation for Merlin:

The basic idea behind Merlin is that data in a computer's main memory can be manipulated roughly five orders of magnitude faster than data on its disk. Throughout the history of computers, there has been a price-capacity-speed tradeoff for data storage: Large-capacity storage (tapes, drums, disks, CD-ROMS) is affordable but slow; high-speed storage ("core", RAM) is expensive but fast. Until recently, high-speed memory was so costly that even a modest amount of chemical information had to be stored on tapes or disks.

But technology has a way of overwhelming problems like this. The amount of chemical information is growing at an alarming rate, but the size of computer memories is growing even faster: at an exponential rate. In the mid-1980's it became possible for a moderately large minicomputer to fit a chemical database of several tens of thousands of structures into its memory. By the early 1990's, a desktop "workstation" could be purchased that could hold all of the known chemicals in the world (ca. 15 million structures) in its memory, along with a bit of information about each.

On the surface, in-memory operations seem like a straightforward good deal: A computer's memory is typically 10^5 times faster than its disk, so everything you could do on disk is 100,000 times faster when you do it in memory. But these simple numbers, while impressive, don't capture the real differences between disk- and memory-based searches:

  • With disk-based systems, you formulate a search carefully, because it can take minutes to days to get your answer back. With Merlin it is usually much faster to get the answer than it is to think up the question. This has a profound effect on user's attitudes towards the EDA system.
  • In disk-based systems, you typically approach with a specific question, often a question of enough significance that you are willing to invest significant effort to find the answer. With Merlin, it is possible to "explore" the database in "real-time" - to poke around and see what is there. Searches are so fast that users adopt a whole new approach to exploratory data analysis.

Scaling down

I pointed out earlier that SMILES and fingerprints take up less space. I estimate it was 1/3 the space of what CAS needed, which is the only comparison I've been able to figure out. That let Daylight scale up to larger data sets for a given price, but also scale down to smaller hardware.

Let's say you had 250,000 structures in the early 1990s. With the Daylight system you would need just under 128 MB of RAM, which meant you could buy a Sun 3, which maxed out at 128 MB, instead of a more expensive computer.

It still required a lot of RAM, and that's where Yosi comes in. His background was in hardware sales, and he knew how to get a computer with a lot of RAM in it. Once the system was ready, Dave and his brother Art put it in the back of a van and went around the country to potential customers to give a demo, often to much astonishment that it could be so fast.

I think the price of RAM was the most important hardware factor in the success of Daylight, but it's not the only one. When I presented some of these ideas at Novartis in 2015, Bernhard Rohde correctly pointed out that the decreasing price of hardware also meant that computers were no longer big purchase items bought and managed by IT, but something that even individual researchers could buy. That's another aspect of scaling down.

While Daylight did sell to corporate IT, their heart was in providing tools and especially toolkits to the programmer-chemists who would further develop solutions for their company.

Success and competitors

By about 1990, Daylight was a market success. I have no real idea how much profit the company made, but it was enough that Dave bought his own planes, including a fighter jet. When I was at the Daylight Summer School in 1998, the students over dinner came up with an estimate of a minimum of $15 million in income and a maximum of $2 million in expenses.

It was also a scientific success, measured by the number of people talking about SMILES and fingerprints in the literature.

I am not a market analyst so I can't give that context. I'm more of a scholar. I've been reading through the old issues of JCICS (now titled JCIM) trying to identify the breakthrough transition for Daylight. There is no bright line, but there are tantalizing hints between the lines.

In 1992 or so (I'll have to track it down), there's a review of a database vendor's product. The reviewer mentions that the vendor plans to have an in-memory database the next year. I can't help but interpret it as a competitor responding to the new Daylight system, and having to deal with customers who now understood Weininger's Realization.

Dave is a revolutionary

The decreasing price of RAM and hardware may help explain Daylight's market success, but Dave wasn't driven by trying to be the next big company. You can see that in how the company acted. Before the Oracle cartridge, they catered more towards the programmer-chemist. They sold VMS and then Unix database servers and toolkits, with somewhat primitive database clients written using the XView widget toolkit for X. I remember Dave once saying that the clients were meant as examples of what users could do, rather than as complete applications. A different sort of company would have developed Windows clients and servers, more tools for non-programmer chemists, and focused more on selling enterprise solutions to IT and middle managers.

A different sort of company would have tried to be the next MDL. Dave didn't think they were having fun at MDL, so why would he want to be like them?

Dave was driven by the idea of bringing chemical information away from the "high priests" who held the secret knowledge of how to make things work. Look at SMILES - Dyson and WLN required extensive training, while SMILES could be taught to non-chemists in an hour or so. Look at fingerprints - the CAS Online screens were the result of years of research in picking out just the right fragments, based on close analysis of the types of queries people do, while the hash fingerprints can be implemented in a day. Look even at the Daylight depictions, which were well known as being ugly. But Dave liked to point out that the code, at least originally, needed only 4K. That's the sort of code a non-priest could understand, and the sort of justification a revolutionary could appreciate.
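
To show just how little machinery a hashed fingerprint needs, here is a toy sketch in Python. It is my own illustration of the general idea (hash each linear path into a fixed-length bit vector and use bitwise containment as a screen), not Daylight's actual algorithm:

import hashlib

def toy_fingerprint(paths, nbits=1024):
    # Hash each linear atom/bond path (e.g. 'C-C=O') to one bit position.
    bits = [0] * nbits
    for path in paths:
        h = int(hashlib.md5(path.encode('utf-8')).hexdigest(), 16)
        bits[h % nbits] = 1
    return bits

# Screening rule: if any query bit is missing from the target, the target
# cannot contain the query substructure, so skip the slow atom-by-atom search.
query  = toy_fingerprint(['C-O', 'C-C-O'])
target = toy_fingerprint(['C-C', 'C-O', 'C-C-O', 'C-C=O'])
print(all(t >= q for q, t in zip(query, target)))   # True: worth a full match

A real implementation enumerates the paths from the connection table and uses a much faster hash, but the essential idea fits in a screenful of code.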

Dave is a successful revolutionary, which is rare. SMILES, SMARTS and fingerprints are part of the way we think about modern cheminformatics. Innumerable tools implement them, or variations of them.

High priests of chemical information

Revolutionary zeal is powerful. I remember hearing Dave's "high priests" talk back in 1998 and feeling empowered: that yes, even as a new person in the field, cheminformatics was something I could tackle on my own.

As I learn more about the history of the field, I've also learned that Dave's view is not that uncommon. In the post-war era the new field of information retrieval wanted to challenge the high priests of library science. (Unfortunately I can't find that reference now.)

Michael Lynch would have been the high priest of chemical information if there ever was one. Yet at ICCS 1988 he comments "I can recollect very little sense, in 1973, that this revolution was imminent. Georges Anderla .. noted that the future impact of very large scale integration (VLSI) was evident only to a very few at that time, so that he quite properly based his projections on the characteristics of the mainframe and minicomputer types then extant. As a result, he noted, he quite failed to see, first, that the PC would result in expertise becoming vastly more widely disseminated, with power passing out of the hands of the small priesthood of computer experts, thus tapping a huge reservoir of innovative thinking, and, second, that the workstation, rather than the dumb terminal, would become standard."

A few years ago I talked with Egon Willighagen. He is one of the CDK developers and an advocate of free and open source software for chemical information. He also used the metaphor of taking information from the high priests to the people, but in his case he meant the previous generation of closed commercial tools, like the Daylight toolkit.

Indeed, one way to think of it is that Dave, the firebrand of Daylight, became its high priest, and only the priests of Daylight control the secret knowledge of fingerprint generation and canonicalization.

That's why I no longer like the metaphor. Lynch and the Sheffield group published many books and papers, including multiple textbooks on how to work with chemical information. Dave and Daylight did a lot of work to disseminate the Daylight way of thinking about cheminformatics. These are not high priests hoarding occult knowledge, but humans trying to do the right thing in an imperfect world.

There's also danger in the metaphor. Firebrand revolutionaries don't tend to know history. Perhaps some of the temple should be saved? At the very least there might be bad feelings if you declare your ideas revolutionary, only to find out not only that they are not new, but that you are talking to the previous revolutionary who proposed them.

John Barnard told me a story of Dave and Lynch meeting at ICCS in 1988, I believe. Dave explained how his fingerprint algorithm worked. Lynch commented something like "so it's like Calvin Mooers' superimposed coding"? Lynch knew his history, and he was correct - fingerprints and superimposed coding are related, though not the same. Dave did not know the history or how to answer the question.
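
For readers who, like Dave at the time, don't know Mooers' work: superimposed coding gives each descriptor a few pseudo-random bit positions and ORs them all into one fixed-length record, accepting occasional false drops. A minimal sketch of that idea, with made-up descriptors, might look like this:

import random

NBITS, BITS_PER_TERM = 64, 3

def codeword(term):
    # Derive a reproducible set of bit positions for one descriptor.
    rng = random.Random(term)
    return rng.sample(range(NBITS), BITS_PER_TERM)

def superimpose(terms):
    # OR all the descriptors' codewords into a single fixed-length record.
    record = 0
    for term in terms:
        for pos in codeword(term):
            record |= 1 << pos
    return record

record = superimpose(['ketone', 'phenyl', 'chlorine'])
query  = superimpose(['phenyl'])
print(query & record == query)   # True, though false drops are possible

Swap "descriptor" for "hashed path" and the kinship with the fingerprints above is plain, which is presumably what Lynch was getting at.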

My view has its own danger. With 75 years of chemical information history, one might feel a paralysis of not doing something out of worry that it's been done before and you just don't know about it.

Leaving Daylight

In the early 2000s Dave became less interested in chemical information. He had grown increasingly disenchanted with the ethics of how pharmaceutical companies work and do research. Those who swear allegiance to money would have no problem just making a sale and taking the profits, but he was more interested in ideas and people. He had plenty of money.

He was concerned about the ethics of how people used his work, though I don't know how big a role that was overall. Early on, when Daylight was still located at Pomona, they got a purchase request from the US military. He didn't want the Daylight tools to be used to make chemical weapons, and put off fulfilling the sale. The military group that wanted the software contacted him and pointed out that they were actually going to use the tools to develop defenses against chemical weapons, which helped him change his mind.

I think the Daylight Oracle cartridge marked the big transition for Daylight. The 1990s was the end of custom domain-specific databases like Thor and Merlin and the rise of object-relational databases with domain-specific extensions. Norah MacCuish (then Shemetulskis) helped show how the Daylight functionality could work as an Informix DataBlade. Oracle then developed their data cartridge, and most of the industry, including the corporate IT that used the Daylight servers, followed suit.

Most of Daylight's income came from the databases, not the toolkit. Companies would rather use widely-used technology because it's easier to find people with the skills to use it, and there can be better integration with other tools. If Daylight didn't switch, then companies would turn to competitive products which, while perhaps not as elegant, were a better fit for IT needs. Daylight did switch, and the Daylight user group meetings (known as "MUG" because it started off as the Medchem User Group) started being filled by DBAs and IT support, not the geek programmer-chemists who were interested in linguistics and information theory and the other topics that excited Dave.

It didn't help that the Daylight tools were somewhat insular and static. The toolkit didn't have an officially supported way to import SD files, though there was a user-contributed package for that, which Daylight staff did maintain. Ragu Bharadwaj developed Daylight's JavaGrins chemical sketcher, which was a first step towards a more GUI-centered Daylight system, but also the only step. The RDKit, as an example of a modern chemical informatics toolkit, includes algorithms for Murcko scaffolds, matched molecular pairs, and maximum common substructure, and continues to get new ones. But Daylight didn't go that route either.

What I think it comes down to is that Dave was really interested in databases and compound collections. Daylight was also a reseller of commercial chemical databases, and Dave enjoyed looking through the different sets. He told me there was a different feel to the chemistry done in different countries, and he could spot that by looking at the data. He was less interested in the other parts of cheminformatics, and as the importance of old-school "chemical information" diminished in the newly renamed field of "chem(o)informatics", so too did his interest, as did the interest from Daylight customers.

Post-Daylight

Dave left chemical information and switched to other topics. He tried to make theobromine-free chocolate, for reasons I don't really understand, though I see now that many people buy carob as a chocolate alternative because it's theobromine free and thus stimulant free. He was also interested in binaural recording and hearing in general. He bought the house next door to turn it into a binaural recording studio.

He became very big into solar power, as a way to help improve the world. He bought 12 solar panels, which he called The Twelve Muses, from a German company and installed them for his houses. These were sun trackers, to maximize the power. Now, Santa Fe is at 2,300 m/7,000 ft. elevation, in a dry, near-desert environment. He managed to overload the transformers because the panels produced a lot more power than their German manufacturer expected. Once that was fixed, both houses were solar powered, plus he had several electric cars and motorcycles, and could feed power back into the grid for profit. He provided a recharging station for the homebrew electric car community who wanted to drive from, say, Albuquerque to Los Alamos. (This was years before Teslas were on the market.) Because he had his own DC power source, he was also able to provide a balanced power system to his recording studio and minimize the power hum noise.

He tried to persuade the state of New Mexico to invest more in solar power. It makes sense, and he showed that it was possible. But while he was still a revolutionary, he was not a politician, and wasn't able to make the change he wanted.

When I last saw him in late 2015, he was making a cookbook of New Mexico desserts.

Dave will always have a place in my heart.

Andrew Dalke
Trollhättan, Sweden
2 December 2016

Changes

  • 2016-12-06: Added a section on leaving Daylight, fixed typos, and added Dave Cosgrove's account about the origin of the term "SMILES".

Categories: FLOSS Project Planets

Raphaël Hertzog: My Free Software Activities in November 2016

Planet Debian - Fri, 2016-12-02 06:45

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8 fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting new CVEs every month.

Then I spent quite some time to review all the entries in dla-needed.txt. I wanted to get rid of some misleading/no longer applicable comments and at the same time help Olaf who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough) such as those affecting dwarfutils, dokuwiki, irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately.

Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1 fixing CVE-2016-9427. The patch was large and had to be manually backported as it was not applying cleanly.

The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp which was going to be removed from testing due to #841581. After looking at that bug, it turns out the bug was fixed in libcrypto++ 5.6.4-3 and I thus closed it.

I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbqsql package that had been rejected from NEW.

I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2 but I still adopted the package in the pkg-security team as it is a security relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications and the discussion sparked quite some interest both in Debian mailing lists and up to LWN.

On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073 that was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour to craft a relatively clean patch that Guillem could apply. Unfortunately, Guillem did not yet manage to pull out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332 which was a request to remove live-build from Debian. While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit more on it. I reviewed all pending patches in the BTS and pushed many changes to git. I still have some pending changes to finish to prettify the Grub menu but I plan to upload a new version really soon now.

Misc bugs filed. I filed two upstream tickets on uwsgi to help fix currently open RC bugs on the package. I filed #844583 on sbuild to support arbitrary version suffix for binary rebuild (binNMU). And I filed #845741 on xserver-xorg-video-qxl to get it fixed for the xorg 1.19 transition.

Zim. While trying to fix #834405 and update the required dependencies, I discovered that I had to update pygtkspellcheck first. Unfortunately, its package maintainer was MIA (missing in action) so I adopted it first as part of the python-modules team.

Distro Tracker. I fixed a small bug that resulted in an ugly traceback when we got queries with a non-ASCII HTTP_REFERER.

Thanks

See you next month for a new summary of my activities.


Categories: FLOSS Project Planets

PyCharm: “What’s New in PyCharm 2016.3” Webinar Recording Available Now!

Planet Python - Fri, 2016-12-02 05:00

Our webinar “What’s New in PyCharm 2016.3” that we organized on Wednesday November 30th is now available on YouTube:

In this webinar Paul Everitt, PyCharm Developer Advocate, shows you around the new features in PyCharm 2016.3 (and shows some other great features that were introduced in 2016).

He discusses improvements to Django (at 1:33), web development support (at 6:07), database support (at 14:45), and more. PyCharm Professional bundles all the new functionality of the latest versions of WebStorm and DataGrip.

After introducing some of these features, he shows how to create a 2D game in Python using the Python Arcade library (from 17:26 onwards). In the process he shows how to unit test the game, profile the game, and how to debug the game faster.

If you’d like to follow along and play with the code, you can get it from Paul’s GitHub repository.

Keep up with the latest PyCharm news on this blog, and follow us on Twitter (@PyCharm)

The Drive to Develop

-PyCharm Team

Categories: FLOSS Project Planets

Matthew Garrett: Ubuntu still isn't free software

Planet Debian - Fri, 2016-12-02 04:37
Mark Shuttleworth just blogged about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.

The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu.

This remains incompatible with the principles of free software. The freedom to take someone else's work and redistribute it is a vital part of the four freedoms. It's legitimate for Canonical to insist that you not pass it off as their work when doing so, but their IP policy continues to insist that you remove all references to Canonical's trademarks even if their use would not infringe trademark law.

If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.

Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.

[1] And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: Drupal Blogs in November

Planet Drupal - Fri, 2016-12-02 02:58
We have news for you, and it's a pretty exciting one. From now on, at the beginning of every month, we will look back at the Drupal blogs we have written over the past month, making sure that nothing slips past you and that you stay as informed as possible. Maybe you would have liked some of the topics, but you were just not at your computer that day, you had a day off, you were too busy at work, and so on. Well, from now on, even if you have missed something, you will be able to catch it later. Drupal Camps in Africa were our first blog topic this month. We got the inspiration for starting the world… READ MORE
Categories: FLOSS Project Planets

Kushal Das: Atomic Working Group update from this week's meeting

Planet Python - Fri, 2016-12-02 01:09

Two days back we had a very productive meeting in the Fedora Atomic Working Group. This post is a summary of the meeting. You can find all the open issues of the working group in this Pagure repo. There were 14 people present at the meeting, which happens every Wednesday at 5PM UTC in the #fedora-meeting-1 channel on the Freenode IRC server.

Fedora 26 change proposal ideas discussion

This topic was the first point of discussion in the meeting. Walters informed us that he will continue working on the OpenShift-related items, mostly the installer, system containers, etc., and also on rpm-ostree. I have started a thread on the mailing list about the change ideas, and we also decided to create a wiki page to capture all of them.

During the Fedora 25 release cycle, we marked a couple of Autocloud tests as non-gating as they were present in the system for some time (we added the actual tests after we found the real issue). Now with Fedora 25 out, we decided to reopen the ticket and mark those tests as gating again. This means that in the future, if they fail, the Fedora 25 Atomic release will be blocked.

The suggestion of creating the rkt base image as a release artifact in the Fedora 26 cycle brought out some interesting discussion in the working group. Dusty Mabe mentioned fixing the current issues first, and only then jumping into the new world. Whether this means we will support rkt in the Atomic Host or not was the other concern. My reaction was maybe not, as to decide that we need to first coordinate between many other teams. rkt is packaged in Fedora; you can install it on a normal system with the following command.

$ sudo dnf install rkt -y

But this does not mean we will be able to add support for rkt image building to OSBS, and Adam Miller reminded us that it will take a major development effort. It is also not on the roadmap of the release-infrastructure team. My proposal is to have only the base image built officially for rkt, and then let our users consume that image. I will dig into this more, as suggested by Dusty, and report back to the working group.

Next, the discussion moved towards a number of technical debts the working group is carrying. One of the major issues (just before the F25 release) was missing Atomic builds, but we managed to fix the issue on time. Jason Brooks commented that this release went much more promptly, and we are making progress there :) A few other points from the discussion were:

  • Whether the Server working group agreed to maintain the Cloud Base image?
  • Having ancient k8s is a problem.
  • We will start having official Fedora containers very soon.
Documentation

Then the discussion moved to documentation, the biggest pain point of the working group in my mind. For any project, documentation can define whether it will become a success or not. Users will move on unless we can provide clearly written instructions (which actually work). For the Atomic Working Group, the major problem is not enough writers. After the discussion in the last Cloud FAD in June, we managed to dig through old wiki pages. Trishna Guha is helping us to move them under this repo. The docs are live at https://fedoracloud.rtfd.io. I have sent out another reminder about this effort to the mailing list. If you think of any example which can help others, please write it down, and send in a pull request. It is perfectly okay if you publish it in any other document format. We will help you out to convert it into the correct format.

You can read the full meeting log here.

Categories: FLOSS Project Planets

Drupal Modules: The One Percent: Drupal Modules: The One Percent — Tota11y (video tutorial)

Planet Drupal - Thu, 2016-12-01 21:25
Episode 8

Here is where we look at Drupal modules running on less than 1% of reporting sites. Today we investigate Tota11y which helps you visualize how your site performs when using assistive technologies. More info on Blue Beanie Day can be found at bluebeanieday.tumblr.com.

Categories: FLOSS Project Planets

Wesley Chun: Replacing text & images with the Google Slides API with Python

Planet Python - Thu, 2016-12-01 20:33
NOTE: The code covered in this post is also available in a video walkthrough; however, the code here differs slightly, featuring some minor improvements to the code in the video.

Introduction

One of the critical things developers have not been able to do previously was access Google Slides presentations programmatically. To address this "shortfall," the Slides team pre-announced their first API a few months ago at Google I/O 2016—also see full announcement video (40+ mins). In early November, the G Suite product team officially launched the API, finally giving all developers access to build or edit Slides presentations from their applications.

In this post, I'll walk through a simple example featuring an existing Slides presentation template with a single slide. On this slide are placeholders for a presentation name and company logo, as illustrated below:

One of the obvious use cases that will come to mind is to take a presentation template replete with "variables" and placeholders, and auto-generate decks from the same source but created with different data for different customers. For example, here's what a "completed" slide would look like after the proxies have been replaced with "real data:"

Using the Google Slides API

We need to edit/write into a Google Slides presentation, meaning the read-write scope from all Slides API scopes below:
  • 'https://www.googleapis.com/auth/presentations' — Read-write access to Slides and Slides presentation properties
  • 'https://www.googleapis.com/auth/presentations.readonly' — View-only access to Slides presentations and properties
  • 'https://www.googleapis.com/auth/drive' — Full access to users' files on Google Drive
Why is the Google Drive API scope listed above? Well, think of it this way: APIs like the Google Sheets and Slides APIs were created to perform spreadsheet and presentation operations. However, importing/exporting, copying, and sharing are all file-based operations, which is where the Drive API fits in. If you need a review of its scopes, check out the Drive auth scopes page in the docs. Copying a file requires the full Drive API scope, which is why it's listed above. If you're not going to copy any files and are only performing actions with the Slides API, you can of course leave it out.

Since we've fully covered the authorization boilerplate in earlier posts and videos, we're going to skip that here and jump right to the action.

Getting started

What are we doing in today's code sample? We start with a slide template file that has "variables" or placeholders for a title and an image. The application code will then replace these proxies with the actual desired text and image, with the goal being that this scaffolding will allow you to automatically generate multiple slide decks but "tweaked" with "real" data that gets substituted into each slide deck.

The title slide template file is TMPLFILE, and the image we're using as the company logo is the Google Slides product icon whose filename is stored as the IMG_FILE variable in my Google Drive. Be sure to use your own image and template files! These definitions plus the scopes to be used in this script are defined like this:
IMG_FILE = 'google-slides.png' # use your own!
TMPLFILE = 'title slide template' # use your own!
SCOPES = (
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/presentations',
)

Skipping past most of the OAuth2 boilerplate, let's move ahead to creating the API service endpoints. The Drive API name is (of course) 'drive', currently on 'v3', while the Slides API is 'slides' and 'v1' in the following call to create a signed HTTP client that's shared with a pair of calls to the apiclient.discovery.build() function to create the API service endpoints:
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
SLIDES = discovery.build('slides', 'v1', http=HTTP)
Copy template file

The first step of the "real" app is to find and copy the template file TMPLFILE. To do this, we'll use DRIVE.files().list() to query for the file, then grab the first match found. Then we'll use DRIVE.files().copy() to copy the file and name it 'Google Slides API template DEMO':
rsp = DRIVE.files().list(q="name='%s'" % TMPLFILE).execute().get('files')[0]
DATA = {'name': 'Google Slides API template DEMO'}
print('** Copying template %r as %r' % (rsp['name'], DATA['name']))
DECK_ID = DRIVE.files().copy(body=DATA, fileId=rsp['id']).execute().get('id')
Find image placeholder

Next, we'll ask the Slides API to get the data on the first (and only) slide in the deck. Specifically, we want the dimensions of the image placeholder. Later on, we will use those properties when replacing it with the company logo, so that it will be automatically resized and centered into the same spot as the image placeholder.
The SLIDES.presentations().get() method is used to read the presentation metadata. Returned is a payload consisting of everything in the presentation: the masters, layouts, and of course, the slides themselves. We only care about the slides, so we get that from the payload. And since there's only one slide, we grab it at index 0. Once we have the slide, we loop through all of the elements on that page and stop when we find the rectangle (image placeholder):
print('** Get slide objects, search for image placeholder')
slide = SLIDES.presentations().get(presentationId=DECK_ID
        ).execute().get('slides')[0]
obj = None
for obj in slide['pageElements']:
    if obj['shape']['shapeType'] == 'RECTANGLE':
        break
Find image file

At this point, the obj variable points to that rectangle. What are we going to replace it with? The company logo, which we now query for using the Drive API:
print('** Searching for icon file')
rsp = DRIVE.files().list(q="name='%s'" % IMG_FILE).execute().get('files')[0]
print(' - Found image %r' % rsp['name'])
img_url = '%s&access_token=%s' % (
        DRIVE.files().get_media(fileId=rsp['id']).uri, creds.access_token)

The query code is similar to when we searched for the template file earlier. The trickiest thing about this snippet is that we need a full URL that points directly to the company logo. We use the DRIVE.files().get_media() method to create that request but don't execute it. Instead, we dig inside the request object itself and grab the file's URI and merge it with the current access token so what we're left with is a valid URL that the Slides API can use to read the image file and create it in the presentation.

Replace text and image

Back to the Slides API for the final steps: replace the title (text variable) with the desired text, add the company logo with the same size and transform as the image placeholder, and delete the image placeholder as it's no longer needed:
print('** Replacing placeholder text and icon')
reqs = [
    {'replaceAllText': {
        'containsText': {'text': '{{NAME}}'},
        'replaceText': 'Hello World!'
    }},
    {'createImage': {
        'url': img_url,
        'elementProperties': {
            'pageObjectId': slide['objectId'],
            'size': obj['size'],
            'transform': obj['transform'],
        }
    }},
    {'deleteObject': {'objectId': obj['objectId']}},
]
SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=DECK_ID).execute()
print('DONE')
Once all the requests have been created, send them to the Slides API then let the user know everything is done.

Conclusion

That's the entire script, just under 60 lines of code. If you watched the video, you may notice a few minor differences in the code. One is the use of the fields parameter in the Slides API calls. They represent the use of field masks, which is a separate topic on its own. As you're learning the API now, it may cause unnecessary confusion, so it's okay to disregard them for now. The other difference is an improvement in the replaceAllText request—the old way in the video is now deprecated, so go with what we've replaced it with in this post.

If your template slide deck and image are in your Google Drive, and you've modified the filenames and run the script, you should get output that looks something like this:
$ python3 slides_template.py
** Copying template 'title slide template' as 'Google Slides API template DEMO'
** Get slide objects, search for image placeholder
** Searching for icon file
- Found image 'google-slides.png'
** Replacing placeholder text and icon
DONE
Below is the entire script for your convenience which runs on both Python 2 and Python 3 (unmodified!). If I were to divide the script into major sections, they would be:
  • Get creds & build API service endpoints
  • Copy template file
  • Get image placeholder size & transform (for replacement image later)
  • Get secure URL for company logo
  • Build and send Slides API requests to...
    • Replace slide title variable with "Hello World!"
    • Create image with secure URL using placeholder size & transform
    • Delete image placeholder
Here's the complete script—by using, copying, and/or modifying this code or any other piece of source from this blog, you implicitly agree to its Apache2 license:
from __future__ import print_function

from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools

IMG_FILE = 'google-slides.png'     # use your own!
TMPLFILE = 'title slide template'  # use your own!
SCOPES = (
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/presentations',
)
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
SLIDES = discovery.build('slides', 'v1', http=HTTP)

rsp = DRIVE.files().list(q="name='%s'" % TMPLFILE).execute().get('files')[0]
DATA = {'name': 'Google Slides API template DEMO'}
print('** Copying template %r as %r' % (rsp['name'], DATA['name']))
DECK_ID = DRIVE.files().copy(body=DATA, fileId=rsp['id']).execute().get('id')

print('** Get slide objects, search for image placeholder')
slide = SLIDES.presentations().get(presentationId=DECK_ID,
        fields='slides').execute().get('slides')[0]
obj = None
for obj in slide['pageElements']:
    if obj['shape']['shapeType'] == 'RECTANGLE':
        break

print('** Searching for icon file')
rsp = DRIVE.files().list(q="name='%s'" % IMG_FILE).execute().get('files')[0]
print(' - Found image %r' % rsp['name'])
img_url = '%s&access_token=%s' % (
        DRIVE.files().get_media(fileId=rsp['id']).uri, creds.access_token)

print('** Replacing placeholder text and icon')
reqs = [
    {'replaceAllText': {
        'containsText': {'text': '{{NAME}}'},
        'replaceText': 'Hello World!'
    }},
    {'createImage': {
        'url': img_url,
        'elementProperties': {
            'pageObjectId': slide['objectId'],
            'size': obj['size'],
            'transform': obj['transform'],
        }
    }},
    {'deleteObject': {'objectId': obj['objectId']}},
]
SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=DECK_ID).execute()
print('DONE')
As with our other code samples, you can now customize it to learn more about the API, integrate into other apps for your own needs, for a mobile frontend, sysadmin script, or a server-side backend!

Code challenge

Add more slides and/or text variables and modify the script to replace them too. EXTRA CREDIT: Change the image-based image placeholder to a text-based image placeholder, say a textbox with the text "{{COMPANY_LOGO}}", and use the replaceAllShapesWithImage request to perform the image replacement. By making this one change, your code should be simplified from the image-based image replacement solution we used in this post.
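
If you attempt the extra credit, the request could look roughly like the sketch below. I haven't run this variant, so double-check the field names (imageUrl, replaceMethod, containsText) against the Slides API reference before relying on it:

# Hypothetical sketch of the extra-credit approach: replace every shape whose
# text contains the placeholder with the image, letting the API handle sizing.
reqs = [
    {'replaceAllText': {
        'containsText': {'text': '{{NAME}}'},
        'replaceText': 'Hello World!'
    }},
    {'replaceAllShapesWithImage': {
        'imageUrl': img_url,
        'replaceMethod': 'CENTER_INSIDE',
        'containsText': {'text': '{{COMPANY_LOGO}}'},
    }},
]
SLIDES.presentations().batchUpdate(body={'requests': reqs},
        presentationId=DECK_ID).execute()

With a text-based placeholder like this, the rectangle search and the deleteObject request both disappear, which is where the simplification comes from.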
Categories: FLOSS Project Planets

Django Weblog: Django bugfix release issued: 1.10.4, 1.9.12, 1.8.17

Planet Python - Thu, 2016-12-01 18:53

Today we've issued the 1.10.4, 1.9.12, and 1.8.17 bugfix releases.

The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key ID used for this release is Tim Graham: 1E8ABDC773EDE252.

Categories: FLOSS Project Planets

Wiki, what’s going on? (Part 18-Making it real)

Planet KDE - Thu, 2016-12-01 17:15

 

WikiToLearn1.0 action plan is getting real

 

Release the new version and start working to improve it: done.

Ok, done! Now let’s start talking about it, spam it, find new users and grow more and more!

Yes, more or less this is the work we are doing in these weeks with our team.

Unimib is funding posters, which we are using to start a new promotional campaign for WikiToLearn! The promo team is working on these new infographics and you are going to love them. Unimib students, stay tuned and get ready to spot our posters all around you.

We are also working hard on both institutional and more informal contacts: new collaborators are coming. The team is organizing and taking part in new events in the near future; stay tuned, more people are going to talk about us and you'll appreciate our efforts! We are also planning a series of new talks to present the new release and to get more and more people involved in our project.

We are also working on agreements with different universities and institutional centers such as GARR, Imperial College and UCL.

Christmas is coming: if you have ideas to celebrate it with our community, contact us! WikiToLearn 1.0 is going to celebrate its first Xmas.

C’mon, new year with the new WikiToLearn is coming: the moment is now!

Share your knowledge, share freedom!

 


Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in November 2016

Planet Debian - Thu, 2016-12-01 16:33

FTP assistant

This month I marked 377 packages for accept and rejected 36 packages. I also sent 13 emails to maintainers asking questions.

Debian LTS

This was the twenty-ninth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 11 hours. During that time I did uploads of

  • [DLA 696-1] bind9 security update for one CVE
  • [DLA 711-1] curl security update for nine CVEs

The upload of curl started as an embargoed one but the discussion about one fix took some time and the upload was a bit delayed.

I also prepared a test package for jasper which takes care of nine CVEs and is available here. If you are interested in jasper, please download it and check whether everything is working in your environment. As upstream only takes care of CVEs/bugs at the moment, maybe we should not upload the old version with patches but the new version with all fixes. Any comments?

Other stuff

As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug will be closed. Don't hesitate, start squashing :-).

In November I also uploaded new versions of libmatthew-java, node-array-find-index, node-ejs, node-querystringify, node-require-dir, node-setimmediate, and libkeepalive.
Further, I added node-json5, node-emojis-list, node-big.js, and node-eslint-plugin-flowtype to the NEW queue, sponsored an upload of node-lodash, adopted gnupg-pkcs11-scd, reverted the -fPIC patch in libctl, and fixed RC bugs in alljoyn-core-1504, alljoyn-core-1509, and alljoyn-core-1604.

Categories: FLOSS Project Planets