As one of the largest Drupal agencies in the world, FFW is no stranger to problems of scale. With large numbers of technical staff, clients, and concurrent projects, workflow management is vital to our work. And when it comes to delivering projects on time while managing resources with agility, consistency and simplicity in the tools we choose play a huge part.
When there are no standards for the tools a team uses (OS, editor, server, php version, etc.) dealing with the toolset adds unnecessary overhead that can eat away development time. You'll quickly find that setting up projects, on-boarding developers, troubleshooting, and even training all become more difficult as you deal with larger projects, larger teams, and more complex requirements.
To help solve these problems, FFW created Drude.

What is Drude?
Drude (Drupal Development Environment) is a tool for defining and managing development environments. It brings together common development tools, minimizes configuration, and ensures environment consistency everywhere in your continuous integration workflow. It automatically configures each project's environment so that team members use the same tools and versions, regardless of the individual requirements of each project. Most importantly, it makes the entire process easy.
With Drude you get fully containerized environments with Docker, cross-platform support (macOS, Windows, and Linux), built-in tools like Drush, Drupal Console, Composer, and PHP Code Sniffer, plug-and-play services like Apache Solr, Varnish, and Memcache, and even built-in testing support using Behat and Selenium. Drude will even automatically configure virtual hosts for you, so no more editing hosts files and server configurations.
With all of this you also get a management tool, which is the heart of Drude. dsh is a command line tool for controlling all aspects of your project's environment. You can use it to stop and start containers, interact with the host virtual machine, use Drush and Drupal Console to execute commands directly against your Drupal sites, and manage the installation and updating of projects.

Let's see how this works
Download the Drude shell command
sudo curl -L https://raw.githubusercontent.com/blinkreaction/drude/master/bin/dsh -o /usr/local/bin/dsh
sudo chmod +x /usr/local/bin/dsh
You can now use the dsh command. Use it to install the prerequisites: Docker, Vagrant, and VirtualBox.
dsh install prerequisites
dsh install boot2docker
These are all one-time steps for setting up Drude. Now that that's done, you only need to set up individual projects. To demonstrate how this works, we have Drupal 7 and 8 test projects available. Check their GitHub pages for additional setup instructions in case the instructions below don't work for you.
Clone the Drupal 8 test project.
git clone https://github.com/blinkreaction/drude-d8-testing.git
Use the init command to initialize local settings and install the Drupal site via Drush. When it finishes, the generated credentials are printed:

User name: admin
User password: 5r58daY2vZ
[ok]
Congratulations, you installed Drupal!
Open http://drupal8.drude in your browser to verify the setup.
The init script automates provisioning, which can be modified per project. It can initialize settings for provisioned services, import databases, install sites, compile Sass, revert features, enable or disable modules, run Behat tests, and many other things.
Now, simply point your browser to http://drupal8.drude
That’s it! Any time a team member wants to participate in a project, all they have to do is download the project repo and run the init command. And because the environments are containerized, they can be deployed anywhere.

Why publicize all this?
Clearly, we've put a lot of work into building a great tool, one we could easily keep to ourselves. But at FFW we are huge supporters of open source. As one of the main supporters of the Drupal Console project, and a major supporter of Drupal itself, we believe that benefiting the community as a whole benefits us exponentially in return. We encourage anyone to use this tool, provide feedback, and even contribute to the project.
Free Software communities produce tons of great software. This software drives innovation and enables everybody to access and use computers, whether or not they can afford new hardware or commercial software. So the benefit to society is obvious. Everybody should just get behind it and support it, right? Well, it is not that easy. Especially when it comes to principles of individual freedom, or trade-offs between self-determination and convenience, it is difficult to communicate the message in a way that reaches and activates a wider audience. How best can we explain the difference between Free Software and services available at no cost (except for the spying on you)?

Campaigning for software freedom is not easy. However, it is part of the Free Software Foundation Europe’s mission. The FSFE teamed up with the Peng! Collective to learn how to run influential campaigns to promote the cause of Free Software. The Peng! Collective is a Berlin-based group of activists known for their successful and quite subversive campaigns for political causes. And Endocode? Endocode is a sponsor of the Free Software Foundation Europe. We are a sponsor because Free Software is essential to us, both as a company and as members of society. And so here we are.
There are some exciting, courageous and engaging campaigns that focus on communicating complex political goals. The escape helpers campaign leaves the audience conflicted between two choices: being a good human rights activist (driven by ideals and demonstrating solidarity with refugees) and being a good citizen (abiding by the law). Great, because the message is to rethink what is legal against what is right. The #slamshell performance emotionally demonstrated the risks associated with oil drilling that are normally regarded as marginal.
These campaigns translate abstract, distant risks or worries into concrete, tangible calls to action. By being provocative, they break the mold and reach a wide audience online and through traditional media. They are “cat content for social change”, as our tutors put it. Campaigners are being urged to stop preaching or complaining, and to start using positive communication combined with subversive PR work instead. Such messaging needs punchlines, which requires some kind of hyperbole – dadaism, hijacking attention, or provocation.
Campaign development is still a pretty down-to-earth task. Through fact-finding research and the analysis of campaign goals, supporting allies and potential opponents, answers to the four essential questions are narrowed down: What is the change that we want to achieve? How can this change be brought about? Who can make the change we want to see? And who has power over the involved people or groups? Setting campaign goals is often a compromise between achieving big changes locally or small changes “globally”. It helps to envision the impact of the campaign through utopia/dystopia brainstorms: What would a world look like where all campaign goals have been achieved perfectly? What would it look like if everything went horribly wrong? These kinds of mental exercises also help to explain the relevance of the campaign goals and show how the intended change can affect people’s lives. The goals may be perfectly obvious to those already passionate about them, but not to outsiders – a common problem regarding the ethics and ideals of Free Software.
Implementing a campaign involves many standard, by-the-book project management tasks. The individual publicity stunts and activities are the actions that form the campaign timeline. A dilemma specific to the FSFE is that the relevant and influential media – social networks especially – are exactly the kind of centralized proprietary platforms against which we are advocating. However, we learned that it may be possible to play this situation to our advantage :-) Since the FSFE’s goals require some heavy lifting of Free Software lobbying, the campaign timeline extends far into the future. We found ourselves thinking about what to present at conferences a year or more from now. Finalizing the campaign plan involves answering the “classical” question of what time, material and talent is required to perform the tasks, and putting them into a timeline. Often this includes outside help for extra manpower or professional expertise. Noticeably, those with technical backgrounds tend to hasten toward a release, underestimating the lead time required to get there, and the duration of the campaign. This tendency works almost, but not quite, entirely unlike in software projects. Securing and confirming the support of allies and protagonists also takes time.
The planned actions need to be reviewed with a focus group that resembles, or at least understands, the target audience. This review should confirm that the message conveyed is in fact understandable and makes sense. It is not possible to get a clear answer on whether or not a campaign project needs an ultimate decision maker. The answer depends too much on the composition of the campaign team and the timeline of the project. The necessary communication infrastructure is pretty straightforward – task boards, plus instant and asynchronous messaging. Most Free Software groups use those anyway.
After two and a half days of workshop, all 15 participants ended up rather tired. However, we had plenty of fun and learned a lot. Surprisingly, the group came up with a good amount of real, usable ideas for activities. Be very afraid :-) The guidance and mentoring by the experienced campaigners from the Peng! Collective helped tremendously. Of course, the workshop was merely an exercise in how to develop and run a campaign for software freedom. The bulk of the work is now ahead of us. But we are off to a good start, and we are curious where this road will take us.
Acquia Developer Center Blog: Both Sides Now: What I Learned When I Jumped from the Supplier Side to the Client Side on the Same Project
Shortly after graduating, I found myself an internship with miggle, a UK-based Web development specialist that uses Drupal exclusively.
As a web production assistant, my primary role was to look after the latter stages of a new website rebuild for an organization in the education sector.
On April 23, I received the congratulations email from Google.
I hadn't known about GSoC until this event: https://www.facebook.com/events/1555921851402308/
As you can see, that event took place right before the GSoC student application deadline. After going home from the event, I was researching a lot about the organizations and projects. I was nervous about choosing a project. So many projects and requirements …
After hours and hours, I finally decided to write a proposal for this project: https://summerofcode.withgoogle.com/projects/#4616126801641472
Even now, I still think my proposal isn't good enough.
My mentor, Ingo, is very kind and has helped me a lot so far. His story about this project is also interesting.
My proposal was accepted. I was happy and surprised.
Now, let the code rock the summer.
With Neon’s infrastructure moving ahead nicely, I’ve been able to update my Plasma Wayland images to build fresh each day on the Neon infrastructure. They use packages built from the KDE Frameworks and Plasma Git master branches, with a default session of KWin running as a Wayland compositor. The foundation is Ubuntu 16.04 LTS.
Follow the one-step instructions on Martin’s blog if you already have Neon Dev Edition Unstable installed.
As usual, it doesn’t work in VirtualBox. Rendering errors appear often, especially with a second monitor, and Ctrl-C doesn’t work in Konsole. Otherwise it’s perfect.
2nd May 2016: amazee.io just launched their Drupal hosting platform built for developers, with full integration into Drop Guard. And that’s when our common story started.
The Amazee team dedicates itself to the Drupal world: “We’re a secure, high-performance, cloud-based hosting solution built for folks who love their Drupal sites as much as we do.”
PyCharm’s visual debugger is one of its most powerful and useful features. The debugger got a big speedup in the recent PyCharm release, and it has an interesting backstory: JetBrains collaborated with PyDev, the popular Python plugin for Eclipse, and funded the work on performance improvements for the common debugger backend.
FZ: Performance has always been a major focus of the debugger. I think that’s actually a prerequisite for a pure-Python debugger.
To give an example here: Python debuggers work through the Python tracing facility (i.e.: sys.settrace), by handling tracing calls and deciding what to do at each call.
Usually a debugger would be called at each step to decide what to do, but pydevd is actually able to completely disable the tracing for most contexts (any context that doesn’t have a breakpoint inside it should run untraced) and re-evaluate its assumptions if a breakpoint is added.
Even with performance as a major focus, the latest release was still able to deliver really nice speedups (the plain Python version had a 40% speed improvement overall, while the Cython version had a 140% improvement).
I must say that at that point there wasn’t any low-hanging fruit left for speeding up the debugger, so the improvement actually came from many small optimizations, and Cython has shown that it can deliver a pretty nice gain given just a few hints.
DT: The performance of the debugger was one of the top-voted requests in the PyCharm tracker. The latest release addresses this by implementing some parts of the debugger in Cython, which leads to huge performance improvements on all types of projects.

Was the Cython decision an easy one?
FZ: Actually, yes, it was a pretty straightforward decision…
The main selling point is that the Cython version is very similar to the Python version, so the same codebase is used for both: the Cython version is generated from the plain Python version by preprocessing it with a mechanism analogous to #ifdef statements in C/C++.
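A toy illustration of that single-codebase approach (the marker syntax below is an assumption for illustration, not pydevd's exact format): the plain-Python file carries the Cython variant in comments, and a build step swaps the branches when generating the .pyx file. The plain branch runs as ordinary Python:

```python
# Hypothetical marker syntax, loosely modeled on comment-based preprocessing;
# the commented-out branch is what the generated Cython (.pyx) file would use.

# IFDEF CYTHON
# cdef class AdditionalFrameInfo:
#     cdef public int suspend_line
# ELSE
class AdditionalFrameInfo(object):
    """Plain-Python fallback; the preprocessor emits the cdef version instead."""
    def __init__(self):
        self.suspend_line = -1
# ENDIF

info = AdditionalFrameInfo()
info.suspend_line = 42
print(info.suspend_line)  # prints 42
```

Because the markers are plain comments, the file stays importable as pure Python, which is what keeps the compiled binaries optional.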
DT: The idea was to make the debugger faster by rewriting the bottlenecks in C, but at the same time to keep compiled binaries optional, so that the pure Python version would still work. Also, it was desirable to have as little code duplication as possible. Cython let us do all that perfectly, so it was a natural decision.

Let’s take a step back and discuss the 2014 decision to merge efforts. How did this conversation get started?
FZ: I was running a crowdfunding campaign for PyDev that had a profiler as one of its main goals, which was something PyCharm wanted to add too. Although the initial proposal didn’t come through, we started talking about what we already had in common, which was the debugger backend, and how each version had different features at that point. I think PyCharm had just backported some of the changes I had made in the latest PyDev version at the time to its fork, and we agreed it would be really nice if we could actually work on the same codebase.
DT: We had used a fork of the PyDev debugger since the beginning of PyCharm, and occasionally I would check what was going on in the PyDev branch to backport features and fixes from there to PyCharm. Meanwhile, Fabio did the same, taking the latest fixes from the PyCharm branch. As time passed and the branches diverged, it got more and more difficult to compare them and backport fixes from one to the other.
After one of the tough merges, I thought maybe we’d better create a common project that would be used in both IDEs. So I decided to contact Fabio, and was very happy when he supported the idea.

Did the merging/un-forking go as you planned, or were there technical or project challenges?
FZ: The merging did go as planned…
The main challenge was the different feature set each version had back then. For instance, PyDev had some improvements on dealing with exceptions, finding referrers, stackless and debugger reload, whereas PyCharm had things such as the multiprocessing, gevent and Django templates (and the final version had to support everything from both sides).
The major pain point on the whole merging was actually on the gevent support, because the debugger really needs threads to work and gevent has an option for monkey-patching the threading library, which made the debugger go haywire.
DT: The main challenge was to test all the fixes done in the PyCharm fork of the debugger for possible regressions in the merged version. We had a set of tests for the debugger, but the coverage, of course, wasn’t 100%. So we made a list of all the debugger issues fixed over the last three years (around 150 issues) and just tested them. That helped us ensure we wouldn’t have regressions in the release.

Fabio, how did it go on your end, having JetBrains sponsor some of your work? Any pushback in your community?
FZ: I must say I didn’t really have any pushback from the community. I’ve always been pretty open-minded about the code in PyDev (which was being used early on in PyCharm for the debugger) and I believe IDEs are a really personal choice. So I’m happy that the code I wrote can reach more people, even if not directly inside PyDev. Also, I think the community saw it as a nice thing, as the improvements in the debugger made both PyDev and PyCharm better IDEs.

The Python-oriented IDEs likely have some other areas where they face common needs. What do you think are some top issues for Python IDEs in 2016 and beyond?
FZ: I agree that there are many common needs among IDEs; they have the same target after all, although with wildly different implementations.
Python code in particular is pretty hard to analyze in real-time — which contrasts with being simple and straightforward to read — and that’s something all “smart” Python IDEs have to deal with, so, there’s a fine balance on performance vs. features there, and that’s probably always going to be a top issue in any Python IDE.
Unfortunately, this is probably also a place where it’s pretty difficult to collaborate as the type inference engine is the heart of a Python IDE (and it’s also what makes it unique in a sense as each implementation ends up favoring one side or the other).
DT: The dynamic nature of Python has always been the main challenge for IDEs in providing assistance to developers. A huge step forward was made with Python 3.5, which added type hinting notation and the typeshed repository, from which we will all benefit a lot. But this is still at an early stage, and we need to define and learn effective ways to adopt type hinting.
Python performance is also a challenge. In the Python world, when you care about performance, you switch from pure Python to libraries written in C, like numpy. Or you try PyPy. But in both cases performance and memory profiling becomes hard or even impossible with the current standard tools and libraries. I think tool developers can collaborate on that to provide better instruments for measuring and improving the performance of Python apps.

What’s in the future for pydevd, performance or otherwise?
FZ: I must say that, performance-wise, I think it has reached a nice balance between ease of development and speed, so right now the plan is to avoid any regressions.
Regarding new development, I don’t personally have any new features planned — the focus right now is on making it rock-solid!
DT: One of the additions to pydevd from the PyCharm side is the ability to capture the types of the function arguments in the running program. PyCharm tries to use this information for code completion, but this feature now is optional and off by default. With the new type hinting in Python 3.5 this idea gets a new spin and the types collected in run-time could be used to annotate functions with types or verify the existing annotations. We are currently experimenting only with types, but it could be taken further to analyse call hierarchy etc.
One of our OSTraining members wanted to restrict access to certain content on his Drupal 8 site.
To do this in Drupal 8, we are going to use the Content Access module.
To follow along with this tutorial, download and install Content Access. I found no errors while using this module, but please note that it is currently a dev release.
Today's third-party applications increasingly depend on web services to retrieve and manipulate data, and Drupal offers a range of web services options for API-first content delivery. For example, a robust first-class web services layer is now available out-of-the-box with Drupal 8. But there are also new approaches to expose Drupal data, including Services and newer entrants like RELAXed Web Services and GraphQL.
The goal of this blog post is to enable Drupal developers in need of web services to make an educated decision about the right web services solution for their project. This blog post also sets the stage for a future blog post, where I plan to share my thoughts about how I believe we should move Drupal core's web services API forward. Getting aligned on our strengths and weaknesses is an essential first step before we can brainstorm about the future.
The Drupal community now has a range of web services modules available in core and as contributed modules, sharing overlapping missions but leveraging disparate mechanisms and architectural styles to achieve them. Here is a comparison of the most notable web services modules in Drupal 8, listing each feature as Core REST / RELAXed / Services:

- Content entity CRUD: Yes / Yes / Yes
- Configuration entity CRUD: Create resource plugin (issue) / Create resource plugin / Yes
- Custom resources: Create resource plugin / Create resource plugin / Create Services plugin
- Custom routes: Create resource plugin or Views REST export (GET) / Create resource plugin / Configurable route prefixes
- Renderable objects: Not applicable / Not applicable / Yes (no contextual blocks or views)
- Translations: Not yet (issue) / Yes / Create Services plugin
- Revisions: Create resource plugin / Yes / Create Services plugin
- File attachments: Create resource plugin / Yes / Create Services plugin
- Shareable UUIDs (GET): Yes / Yes / Yes
- Authenticated user resources (log in/out, password reset): Not yet (issue) / No / User login and logout

Core RESTful Web Services
Thanks to the Web Services and Context Core Initiative (WSCCI), Drupal 8 is now an out-of-the-box REST server with operations to create, read, update, and delete (CRUD) content entities such as nodes, users, taxonomy terms, and comments. The four primary REST modules in core are:
- Serialization is able to perform serialization by providing normalizers and encoders. First, it normalizes Drupal data (entities and their fields) into arrays with a particular structure. Any normalization can then be sent to an encoder, which transforms those arrays into data formats such as JSON or XML.
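The two-stage normalize-then-encode pipeline is easy to picture. Here is a language-neutral sketch in Python (Drupal's actual implementation is PHP, and these class names are made up for illustration):

```python
import json

# Hypothetical normalizer: turns a domain object into plain dicts/lists,
# mirroring how Drupal's Serialization module normalizes entities and fields.
class NodeNormalizer:
    def normalize(self, node):
        return {"title": [{"value": node["title"]}],
                "status": [{"value": node["published"]}]}

# Hypothetical encoder: turns the normalized structure into a wire format.
class JsonEncoder:
    def encode(self, data):
        return json.dumps(data, sort_keys=True)

node = {"title": "Hello", "published": True}
normalized = NodeNormalizer().normalize(node)   # stage 1: normalize
payload = JsonEncoder().encode(normalized)      # stage 2: encode
print(payload)
```

The point of the split is that any normalization can be handed to any encoder, so the same normalized structure can become JSON, XML, or another format.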
- RESTful Web Services allows for HTTP methods to be performed on existing resources including but not limited to content entities and views (the latter facilitated through the “REST export" display in Views) and custom resources added through REST plugins.
- HAL builds on top of the Serialization module and adds the Hypertext Application Language normalization, a format that enables you to design an API geared toward clients moving between distinct resources through hyperlinks.
- Basic Auth allows you to include a username and password with request headers for operations requiring permissions beyond that of an anonymous user. It should only be used with HTTPS.
Core REST adheres strictly to REST principles in that resources directly match their URIs (accessible via a query parameter, e.g. ?_format=json for JSON) and in the ability to serialize non-content into JSON or XML representations. By default, core REST also includes two authentication mechanisms: basic authentication and cookie-based authentication.
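As a sketch of what a client request looks like, here is a small Python example (the site URL is hypothetical, and the canned response is an abridged, assumed shape of core REST's JSON for a node):

```python
import json
from urllib.parse import urlencode

# Core REST exposes entities at their canonical paths, with a _format
# query parameter selecting the serialization.
base = "http://example.com/node/1"  # hypothetical site
url = base + "?" + urlencode({"_format": "json"})

# A canned response shaped like core REST's JSON for a node (abridged,
# assumed for illustration); a real client would fetch this over HTTP.
canned = '{"nid": [{"value": 1}], "title": [{"value": "Hello world"}]}'
node = json.loads(canned)

print(url)
print(node["title"][0]["value"])
```

For write operations, the same URL scheme is used with POST/PATCH/DELETE, subject to the authentication mechanisms described above.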
While core REST provides a range of features with only a few steps of configuration, there are several reasons why other options, available as contributed modules, may be a better choice. Limitations of core REST include the lack of support for configuration entities as well as the inability to include file attachments and revisions in response payloads. With your help, we can continue to improve and expand core's REST support.

RELAXed Web Services
As I highlighted in my recent blog post about improving Drupal's content workflow, RELAXed Web Services is part of a larger suite of modules handling content staging and deployment across environments. It is explicitly tied to the CouchDB API specification and, when enabled, yields a REST API that operates like the CouchDB REST API. This means that CouchDB integration with client-side libraries such as PouchDB and Hood.ie makes offline-enabled Drupal possible, synchronizing content once the client regains connectivity. Moreover, people new to Drupal who have exposure to CouchDB will immediately understand the API, since the endpoints are robustly documented.
RELAXed Web Services depends on core's REST modules and extends their functionality by adding support for translations, parent revisions (through the Multiversion module), file attachments, and especially cross-environment UUID references, which make it possible to replicate content to Drupal sites or other CouchDB-compatible services. UUID references and revisions are essential to resolving merge conflicts during the content staging process. I believe it would be great to support translations, parent revisions, file attachments, and UUID references in core's RESTful web services — we simply didn't get around to them in time for Drupal 8.0.0.

Services
Since RESTful Web Services are now incorporated into Drupal 8 core, relevant contributed modules have either been superseded or have gained new missions in the interest of extending existing core REST functionality. In the case of Services, a popular Drupal 7 module for providing Drupal data to external applications, the module has evolved considerably for its upcoming Drupal 8 release.
With Services in Drupal 8 you can assign a custom name to your endpoint to distinguish your resources from those provisioned by core and also provision custom resources similar to core's RESTful Web Services. In addition to content entities, Services supports configuration entities such as blocks and menus — this can be important when you want to build a decoupled application that leverages Drupal's menu and blocks system. Moreover, Services is capable of returning renderable objects encoded in JSON, which allows you to use Drupal's server-side rendering of blocks and menus in an entirely distinct application.
At the time of this writing, the Drupal 8 version of the Services module is not yet feature-complete: there is no test coverage, no content entity validation (when creating or modifying), no field access checking, and no CSRF protection. Caution is therefore important when using Services in its current state, and contributions are greatly appreciated.

GraphQL
GraphQL, originally created by Facebook to power its data fetching, is a query language that enables fewer queries and limits response bloat. Rather than tightly coupling responses with a predefined schema, GraphQL overturns this common practice by allowing for the client's request to explicitly tailor a response so that the client only receives what it needs: no more and no less. To accomplish this, client requests and server responses have a shared shape. It doesn't fall into the same category as the web services modules that expose a REST API and as such is absent from the table above.
GraphQL shifts responsibility from the server to the client: the server publishes its possibilities, and the client publishes its requirements instead of receiving a response dictated solely by the server. In addition, information from related entities (e.g. both a node's body and its author's e-mail address) can be retrieved in a single request rather than successive ones.
Typical REST APIs tend to be static (or versioned, in many cases, e.g. /api/v1) in order to facilitate backwards compatibility for applications. However, in Drupal's case, when the underlying content model is inevitably augmented or otherwise changed, schema compatibility is no longer guaranteed. For instance, when you remove a field from a content type or modify it, Drupal's core REST API is no longer compatible with those applications expecting that field to be present. With GraphQL's native schema introspection and client-specified queries, the API is much less opaque from the client's perspective in that the client is aware of what response will result according to its own requirements.
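To make the contrast concrete, here is a small sketch in Python (the query's field names are hypothetical; the Drupal GraphQL module's actual schema may differ). The client names exactly the fields it needs, and the response mirrors the shape of the query:

```python
import json

# Hypothetical client-specified query: ask only for a node's title
# and its author's mail, nothing else from the content model.
query = """
{
  node(id: 1) {
    title
    author { mail }
  }
}
"""

# A mock server response for illustration; a real GraphQL server answers
# with exactly the shape the client requested.
response = json.loads(
    '{"data": {"node": {"title": "Hello", "author": {"mail": "a@example.com"}}}}'
)
node = response["data"]["node"]

# Only the requested fields are present in the response.
print(sorted(node.keys()))  # prints ['author', 'title']
```

Because unrelated fields never appear in the query, removing or changing them on the server does not break this request, which is the resilience argument made above.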
I'm very bullish on the potential for GraphQL, which I believe makes a lot of sense in core in the long term. I featured the project in my Barcelona keynote (demo video), and Acquia also sponsored development of the GraphQL module (Drupal 8 only) following DrupalCon Barcelona. The GraphQL module, created by Sebastian Siemssen, now supports read queries, implements the GraphiQL query testing interface, and can be integrated with Relay (with some limitations).

Conclusion
For most simple REST API use cases, core REST is adequate, but core REST can be insufficient for more complex use cases. Depending on your use case, you may need more off-the-shelf functionality without the need to write a resource plugin or custom code, such as support for configuration entity CRUD (Services); for revisions, file attachments, translations, and cross-environment UUIDs (RELAXed); or for client-driven queries (GraphQL).
Special thanks to Preston So for contributions to this blog post and to Moshe Weitzman, Kyle Browning, Kris Vanderwater, Wim Leers, Sebastian Siemssen, Tim Millwood and Ted Bowman for their feedback during its writing.
The machine is a complete ARM-based PC with micro HDMI, SATA, USB and many other connectors, and it includes a full keyboard and a 5" LCD touch screen. The 6000mAh battery is claimed to provide a whole day of battery life, but I have not seen any independent tests confirming this. The vendor is still collecting preorders; as of last night, 22 more orders were needed before production would start.
As far as I know, this is the first handheld preinstalled with Debian. Please let me know if you know of any others. Is it the first computer being sold with Debian preinstalled?
Jessie was released one year ago now and the Java Team has been busy preparing the next release. Here is a quick summary of the current state of the Java packages:
- A total of 136 packages have been added, 63 removed, 213 upgraded to a new upstream release, and 145 updated. We are now maintaining 892 packages (+12.34%).
- OpenJDK 8 is now the default Java runtime in testing/unstable. OpenJDK 7 has been removed, as well as several packages that couldn't be upgraded to work with OpenJDK 8 (avian, eclipse).
- OpenJDK 9 is available in experimental. As a reminder, it won't be part of the next release; OpenJDK 8 will be the only Java runtime supported for Stretch.
- Netbeans didn't make it into Jessie, but it is now back and up to date.
- The main build tools are close to their latest upstream releases, especially Maven and Gradle which were seriously lagging behind.
- Scala has been upgraded to version 2.11. We are looking for Scala experts to maintain the package and its dependencies.
- Freemind has been removed due to lack of maintenance; Freeplane is recommended instead.
- The reproducibility rate has greatly improved, climbing from 50% to 75% in the past year.
- Backports are continuously provided for the key packages and applications: OpenJDK 8, OpenJFX, Ant, Maven, Gradle, Tomcat 7 & 8, Jetty 8 & 9, jEdit.
- The transition to Maven 3 has been completed, and packages are no longer built with Maven 2.
- We replaced several obsolete libraries and transitioned them to their latest versions - for example, asm2, commons-net1 and commons-net2. Groovy 1.x was replaced with Groovy 2, and we upgraded BND, an important tool to develop with OSGi, and more than thirty of its reverse-dependencies from the 1.x series to version 2.4.1.
- New packaging tools have been created to work with Gradle (gradle-debian-helper) and Ivy (ivy-debian-helper).
- We have several difficult transitions ahead: BND 3, Tomcat 7 to 8, Jetty 8 to 9, ASM 5, and of course Java 9. Any help would be welcome.
- Eclipse is severely outdated and currently not part of testing. We would like to update this important piece of software and its corresponding modules to the latest upstream release, but we need more active people who want to maintain them. If you care about the Eclipse ecosystem, please get in touch with us.
- We are still in the midst of removing old libraries like asm3, commons-httpclient, and the servlet 2.5 API, which is part of the Tomcat 6 source package.
- Want to see Azureus/Vuze in Stretch again? Packaging is almost complete but we are looking for someone who can clarify remaining licensing issues with upstream and wants to maintain the software for the foreseeable future.
- Do you have more ideas and want to get involved with the Java Team? Just send your suggestions to firstname.lastname@example.org or chat with us on IRC at irc.debian.org, #debian-java.
- The Java Team is not the only team that maintains Java software in Debian. DebianMed, DebianScience and the Android Tools Maintainers rely heavily on Java. By helping the Java Team and working together, you can improve the Java ecosystem and further the efforts of multiple other fields of endeavor all at once.
The lists below detail the changes in jessie-backports and testing. Libraries and Debian-specific tools have been excluded.
Packages added to jessie-backports:
- ant (1.9.7)
- elasticsearch (1.6.2)
- gradle (2.10)
- groovy2 (2.4.5)
- japi-compliance-checker (1.5)
- jedit (5.3.0)
- jetty8 (8.1.19)
- jetty9 (9.2.14)
- maven (3.3.9)
- openjdk-7-jre-dcevm (7u79)
- openjdk-8 (8u72-b15)
- openjfx (8u60-b27)
- tomcat7 (7.0.69)
- tomcat8 (8.0.32)
Packages removed from testing:
Packages added to testing:
- apache-directory-server (2.0.0~M15)
- dokujclient (3.8.1)
- elasticsearch (1.7.3)
- ivyplusplus (1.14)
- jetty9 (9.2.16)
- netbeans (8.1)
- openjdk-8 (8u91-b14)
- openjdk-8-jre-dcevm (8u74)
- openjfx (8u60-b27)
Packages upgraded in testing:
- activemq (5.13.2)
- ant (1.9.7)
- aspectj (1.8.9)
- bnd (2.4.1)
- checkstyle (6.15)
- eclipse-gef (3.9.100)
- electric (9.06)
- felix-main (5.0.0)
- findbugs (3.0.1)
- fop (2.1)
- freeplane (1.3.15)
- gant (1.9.11)
- gradle (2.10)
- groovy2 (2.4.5)
- hsqldb (2.3.3)
- icedtea-web (1.6.2)
- ivy (2.4.0)
- jajuk (1.10.9)
- jakarta-jmeter (2.13)
- japi-compliance-checker (1.7)
- jasmin-sable (2.5.0)
- java-common (0.57)
- java-package (0.61)
- jedit (5.3.0)
- jetty8 (8.1.19)
- jftp (1.60)
- jgit (3.7.1)
- jruby (1.7.22)
- jtreg (4.2-b01)
- libapache-mod-jk (1.2.41)
- maven (3.3.9)
- nailgun (0.9.1)
- pleiades (1.6.0)
- proguard (5.2.1)
- robocode (184.108.40.206)
- sablecc (3.7)
- scala (2.11.6)
- service-wrapper-java (3.5.26)
- simplyhtml (0.16.13)
- svnkit (1.8.12)
- sweethome3d-textures-editor (1.4)
- tomcat-native (1.1.33)
- tomcat7 (7.0.69)
- tomcat8 (8.0.32)
- triplea (220.127.116.11)
- uimaj (2.8.1)
- weka (3.6.13)
- zookeeper (3.4.8)
Drupal Watchdog was founded in 2011 by Tag1 Consulting as a resource for the Drupal community to share news and information. Now in its sixth year, Drupal Watchdog is ready to expand to meet the needs of this growing community.
Drupal Watchdog will now be published by Linux New Media, aptly described as the Pulse of Open Source.
“It’s very clear that the folks at Linux New Media know what they’re doing, and that they truly value the open source culture,” said Jeremy Andrews, CEO/Founding Partner, Tag1 Consulting. “I’m ecstatic that the magazine will not just live on, but it will thrive as a quarterly publication … this is a wonderful step forward that benefits everyone who reads and contributes to Drupal Watchdog.”
The magazine will continue to be offered in print and digital formats, and Linux New Media’s international structure provides better service to subscribers worldwide, with local offices in North America and Europe and ordering options in various local currencies.
“We don’t want to change what has brought Drupal Watchdog this far, but we do want to see it grow and expand to the next level, which mainly means – extending the reach of the magazine,” said Brian Osborn, CEO and Publisher, Linux New Media. “As our first step, Drupal Watchdog will now be published quarterly, helping us stay even more current in our coverage and in more frequent contact with our readership.”
Drupal Watchdog is written for the Drupal community and will only thrive through community participation.
Here is what you can do to help:
- Join the Drupal Watchdog community on Facebook
- Follow Drupal Watchdog on Twitter
- Visit the Drupal Watchdog team at DrupalCon
- Subscribe to the magazine so you won’t miss an issue
- Provide your feedback through our reader survey at drupalwatchdog.com/reader-survey
The first issue of Drupal Watchdog published by Linux New Media will be available May 9th! All DrupalCon attendees will receive a copy at the event. Come meet the new team, and learn more about the future of Drupal Watchdog!
We're about a month away from the next PyCon conference in Portland, Oregon. In the meantime, we are organizing our 58th meetup at our lovely UQAM. Join us if you would like to see what the Python community in Montreal is doing.
As usual, we are welcoming guests presenting in both languages, who will show you their projects and accomplishments.
Don't forget to join us after the meetup at the Benelux to celebrate spring in our lovely city.
Flash presentations
Kate Arthur: Kids CODE Jeunesse
Kids Code Jeunesse is dedicated to giving every Canadian child the chance to learn to code and to learn computational thinking. We introduce educators, parents, and communities to intuitive teaching tools. We work in classrooms and community centres, host events, and give workshops to support engaging educational experiences for everyone.
Christophe Reverd: Club Framboise (http://clubframboise.ca/)
A presentation of Club Framboise, the community of Raspberry Pi users in Montreal.
Main presentations
Vadim Gubergrits: DIY Quantum Computer
An introduction to Quantum Computing with Python.
Pascal Priori: santropol-feast: Savoir-faire Linux and volunteers are supporting the Santropol Roulant (https://github.com/savoirfairelinux/santropol-feast)
As part of the Maison du logiciel libre, Savoir-faire Linux and a group of volunteers are supporting the Santropol Roulant, a Montreal community organization, in building a Django platform for managing its client database. At the heart of the Santropol Roulant's activities is its meals-on-wheels service, which cooks, prepares, and delivers more than a hundred hot meals every day to people with reduced autonomy, so the client database plays a key role in its chain of services. Built with Django, the project is looking for volunteers who want to get involved and contribute to the continued development of the platform!
George Peristerakis: How CI is done in Openstack
In George's last talk, there were a lot of questions about the details of integrating code review and continuous integration in OpenStack. This talk is a follow-up on the process and the technology behind implementing CI for OpenStack.
Where
201, Président-Kennedy avenue
Monday, May 9th 2016
Schedule
- 6:00pm — Doors open
- 6:30pm — Presentations start
- 7:30pm — Break
- 7:45pm — Second round of presentations
- 9:00pm — End of the meeting, have a drink with us
- Savoir-faire Linux
Kaggle competitions are a fantastic way to learn data science and build your portfolio. I personally used Kaggle to learn many data science concepts. I started out with Kaggle a few months after learning programming, and later won several competitions.
Many people might not be aware of it, but for a couple of years now we have had an excellent tool for tracking and recognising contributors to the Debian Project: Debian Contributors
Debian is a big project, and many of the people working on it do not have great visibility, especially if they are not DDs or DMs. We are all volunteers, so it is very important that everybody gets credited for their work. No matter how small or unimportant they might think their work is, we need to recognise it!
One great feature of the system is that anybody can sign up to provide a new data source. If you have a way to create a list of people that is helping in your project, you can give them credit!
If you open the Contributors main page, you will get a list of all the groups with recent activity, and the people credited for their work. The data sources page gives information about each data source and who administers it.
For example, my Contributors page shows the many ways in which the system recognises me, all the way back to 2004! That includes commits to different projects, bug reports, and package uploads.
I have been maintaining a few of the data sources that track commits to Git and Subversion repositories:
- The Go packaging group (added just a couple of weeks ago).
- The Perl packaging group.
The last two are a bit problematic, as they group together all commits to the respective VCS repositories without distinguishing to which sub-projects the contributions were made.
The Go and Perl groups' contributions are already extracted from that big pile of data, but it would be much nicer if each substantial packaging team had its own data source. Sadly, my time is limited, so this is where you come into the picture!
If you are a member of a team, and want to help with this effort, adopt a new data source. You can be providing commit logs, but it is not limited to that; think of translators, event volunteers, BSP attendants, etc.
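To give a feel for what a commit-based data source boils down to, here is a minimal sketch that turns `git log --format='%ae %aI'` output into per-contributor first/last activity dates. The emails and log text are made up for illustration, and the real Debian Contributors submission format and endpoint are documented on the site itself; this only shows the extraction step.

```python
from collections import defaultdict

def extract_contributors(git_log: str) -> dict:
    """Parse `git log --format='%ae %aI'` output into a map of
    author email -> (first, last) contribution dates."""
    seen = defaultdict(list)
    for line in git_log.strip().splitlines():
        email, date = line.split()
        seen[email].append(date[:10])  # keep only YYYY-MM-DD
    return {email: (min(dates), max(dates)) for email, dates in seen.items()}

# Hypothetical log output for two contributors:
log = """\
alice@example.org 2016-04-02T10:00:00+00:00
bob@example.org 2015-11-20T09:30:00+00:00
alice@example.org 2014-07-01T12:00:00+00:00
"""
print(extract_contributors(log))
```

A real data source would run this over each team repository and submit the resulting identities and dates periodically; the same shape works for non-commit sources such as lists of event volunteers or translators.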
In Drupal Commerce 1.x, we used the Commerce Fancy Attributes and Field Extractor modules to render attributes more dynamically than just using simple select lists. This let you do things like show a color swatch instead of just a color name for a customer to select.
Fancy Attributes on a product display in Commerce Kickstart 2.x.
In Commerce 2.0-alpha4, we introduced dedicated product attribute entity types. Building on top of them and other contributed modules, we can now provide fancy attributes out of the box! When presenting the attribute dropdown, we show the labels of attribute values. But since attribute values are fieldable, we can just as easily use a different field to represent them, such as an image or a color field. To accomplish this, we provide a new element type that renders the attribute value entities as radio button options.
Read more to see an example configuration.
Prompted by Tollef moving to Hugo, I investigated a replacement blog engine. The former site used Wordpress, which is just overhead - my blog doesn't need to be generated on every view, and it doesn't need the security implications of yet another website login and admin interface either.
So, I've chosen Pelican, with the code living in a private git repo, naturally. I wanted a generator that was supported in Jessie. I first tried nikola, but it turns out that nikola in jessie has since had syntax changes. I looked at creating backports, but the new upstream release adds a python module not yet in Debian, so that would be an extra amount of work.
Hopefully, this won't flood planet - I've gone through the RSS content to update timestamps but the URLs have changed.
With the release of Drupal 8.1 on April 20th, the BigPipe module was added to core to increase the speed of Drupal for anonymous and logged-in visitors.
What does BigPipe do in general?
BigPipe is a technique for rendering a webpage in phases. It assembles the complete page from components, ordered by the speed of the components themselves. This technique gives visitors the feeling that the website is faster than it may actually be, thus boosting the user experience.
This technique, originally developed by Facebook, applies the idea of multi-threading, just as processors do. It dispatches multiple calls to a single backend to make full use of the web server, and thus renders a webpage faster than conventional rendering does.
What does BigPipe do in Drupal?
For “normal” websites with anonymous visitors, BigPipe doesn’t do much. If you use a caching engine like Varnish, or even Drupal's cache itself, pages are generally rendered fast enough. But when dynamic content is involved, such as lists of related, personalized, or localized content, BigPipe can kick in and really make a difference. When a visitor opens the website, BigPipe first returns the page skeleton, which can be cached: elements like the menus, footer, header, and often even the content. Then the rendering of the dynamic content starts. This means that the visitor of your website is already reading the most important content, and sees the dynamic related list later on, after it is loaded asynchronously.
For websites with logged in users BigPipe can be a real boost in performance. Standard Drupal cache doesn’t work out of the box for logged in users. For Drupal 7 you had the Authenticated User Page Caching (Authcache) module (which had some disadvantages), but for Drupal 8 there was nothing. Until Drupal 8.1!
With BigPipe, Drupal is now able to cache certain parts of the page (the skeleton I mentioned above) and render the other parts in parallel. And those parts are cacheable by themselves.
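The phased-rendering idea can be sketched language-agnostically. The toy simulation below (it is not Drupal's actual implementation, and all names in it are illustrative) flushes the cacheable skeleton with placeholders first, then streams one replacement chunk per slow component as it becomes ready - on a real page those replacements arrive as small inline scripts that swap the placeholder markup.

```python
def render_bigpipe(skeleton: str, slow_parts: dict) -> list:
    """Simulate BigPipe: emit the skeleton immediately, then one
    replacement chunk per placeholder as each component is 'ready'."""
    chunks = [skeleton]  # first flush: the cacheable page skeleton
    for placeholder, render in slow_parts.items():
        # each later flush swaps a placeholder for its real content
        chunks.append(f"REPLACE {placeholder} WITH {render()}")
    return chunks

skeleton = "<html><div id='related'>PLACEHOLDER</div></html>"
chunks = render_bigpipe(
    skeleton,
    {"related": lambda: "<ul><li>Related article</li></ul>"},
)
print(chunks[0])  # the visitor sees the skeleton right away
print(chunks[1])  # the dynamic related list arrives later
```

The key property is that the first chunk never waits on the slow components, which is exactly why the skeleton can be served from cache even for logged-in users.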
The video was made by Dries Buytaert.
BigPipe in Drupal
As I said, starting from Drupal 8.1, BigPipe is included as a core module, and everybody can use it. Whether you are on a budget hosting platform or hosting your own website on state-of-the-art servers, it is basically just one (1) click away. You can just enable the module and get all the benefits BigPipe has to offer!
Each day, more Drupal 7 modules are being migrated over to Drupal 8, and new ones are being created for the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules, projects, and tools available for Drupal 8. This week: Display Suite.