FLOSS Project Planets

LakeDrops Drupal Consulting, Development and Hosting: Migrate to Drupal 8

Planet Drupal - Tue, 2017-03-21 15:54
Migrate to Drupal 8 Richard Papp Tue, 03/21/2017 - 20:54 The first meeting of the Drupal User Group Bodensee (Lake Constance) in 2017 will be about migration to Drupal 8.
Categories: FLOSS Project Planets

Community Over Code: Shane’s Apache Director Position Statement, 2017

Planet Apache - Tue, 2017-03-21 15:20

The ASF is holding its annual Members' Meeting next week to elect a new board and a number of new Members to the ASF. I'm honored to have been nominated to stand for the board election, and I'm continuing my tradition of publicly posting my vision for Apache each year.

Please read on for my take on what’s important for the ASF’s future…

Shane’s Director Position Statement 2017 v1.0

If you want a director who will keep the board focused on being clear, consistent, and polite; who will provide oversight for independent governance for our projects; who will help the board improve our shared strategic vision for growth while delegating effectively to the officers and volunteers who provide services to our projects, then I ask you to vote for me.

What We Need In A Board

We are lucky to have candidates who all have immense amounts of passion for the ASF and experience in the Apache Way of doing things. But that’s not enough to make an effective board. We need directors who can work well together, and who can work well when speaking to all the other parts of the Foundation: with our corporate operations (infra, brand, legal, press, fundraising, and even our vendors and sponsors), and with the thousands of volunteers working in Apache project communities.

The board needs to focus on providing independent oversight for everything we do. That independence from corporate influence is the most important part of what makes the ASF different. That oversight should follow "trust but verify": we trust that our projects will do the right thing, and verify by reading their quarterly reports. Only if something seems wrong does the board speak up – and then, to ask the community to self-correct. Only if a project community can't self-correct does the board take formal action.

We need a board that will give the officers, staff, and volunteers who run our non-project corporate operations the same respect and trust as we do our projects. Since we rely wholly on unpaid volunteers to govern organizational decisions, the board needs to ensure officers have a safe, consistent, and clearly defined space to do all the “paperwork” that keeps our legal corporation running. Since all corporate officers provide monthly reports, the board has plenty of visibility into what they do.

When the board has questions or advice – or when directors have questions – they need to ensure it’s brought into project communities clearly, concisely, and professionally. The organizational aspects of providing oversight are often not the day-to-day work that committers are doing on their project codebase. When the board (or any officer) jumps into a project community, we need to explain not only how things should work at Apache, but also why they work that way.

I hope that I’ve shown this kind of behavior in the past; if I haven’t, please let me know. Keeping our communities welcoming is important.

What Shane Does At Apache

For those who don’t follow Apache operations on a regular basis, here are some of the places where I’ve worked to take the tribal knowledge of our mailing lists, and better explain it to both our communities and the world at large:

I’ve served on the board for several terms, and serve as VP, Brand Management. I’m hoping to get back to coding on Apache PonyMail. My first mail to an Apache list was in November 1999.

If elected, I will
  • Attend every board meeting
  • Ensure that there is clear, consistent, and polite feedback from the board to projects
  • Work to promote constructive, polite, and efficient working environments for our staff and all our community volunteers
  • Speak at every ApacheCon (if they accept my CFPs!)
  • Be available to speak or meet with Apache projects or meetups in the New England area or other conferences I attend
About Shane

I am currently unemployed and hold no allegiance other than to the ASF (and my family!). I will not accept a job that would compromise my ability to act in the best interests of the ASF. I live with my wife, daughter, and four cats.


The post Shane’s Apache Director Position Statement, 2017 appeared first on Community Over Code.

Categories: FLOSS Project Planets

MidCamp - Midwest Drupal Camp: Call for Volunteers

Planet Drupal - Tue, 2017-03-21 15:12
We need you!

Want to give back to the Drupal Community without writing a line of code? Volunteer to help out at MidCamp 2017.  We’re looking for people to help with all kinds of tasks including: 

  • For setup, we need help making sure registration is ready to roll, and getting T-shirts ready to move.

  • For teardown, we need to undo all of the setup: packing up the rooms and the registration desk, collecting signage, and making it look like we were never there.

Registration and Ticketing
  • We need ticket scanners, program dispersers, and people to answer questions.

Room Monitors
  • Pick your sessions and count heads, make sure the speakers have what they need to survive, and help with the in-room A/V

If you’re interested in volunteering or would like to find out more, please contact us.


Categories: FLOSS Project Planets

WireGuard in Google Summer of Code

Planet KDE - Tue, 2017-03-21 14:52

WireGuard is participating in Google Summer of Code 2017. If you're a student who would like to be funded this summer for writing interesting kernel code, studying cryptography, building networks, or working on a wide variety of interesting problems, then this might be appealing. The program opened to students on March 20th. If you're applying for WireGuard, choose "Linux Foundation" and state in your proposal that you'd like to work on WireGuard with "Jason Donenfeld" as your mentor.

Categories: FLOSS Project Planets

Acquia Lightning Blog: Forward Revisions and Translated Content

Planet Drupal - Tue, 2017-03-21 14:46
Forward Revisions and Translated Content Adam Balsam Tue, 03/21/2017 - 14:46

Core contributors are currently working on a solution for #2766957 Forward revisions + translation UI can result in forked draft revisions. This issue also affects users of Workbench Moderation (and therefore users of Lightning).

The problem presents itself when:

  • The site uses Lightning Workflow
  • Content Translation is enabled with at least one additional language defined (let's say English and Spanish) 
  • A piece of content exists where:
    • There is a published English and a published Spanish version of the content.
    • Both the English and Spanish version have unpublished edits (AKA forward revisions).
  • An editor publishes the forward revision for either the English or Spanish version (let's say English).

The result is that the existing published Spanish version becomes unpublished, even though the editor took no action on that version at all. This happens because the system marks the unpublished Spanish revision as the default revision.

A workaround exists in the Content Translation Workflow module. If you are still using Drupal core 8.2.x (which, as of this writing, Lightning is) you will also need a core patch that adds a getLoadedRevisionId() method to ContentEntityBase.

Workaround Summary
  1. Apply this core patch.
  2. Add the Content Translation Workflow module to your codebase and enable it.
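The actual core patch lives on the drupal.org issue, so nothing below should be taken as the real patch or file contents. As a toy illustration of step 1 only, here is what applying a patch that adds a `getLoadedRevisionId()` method looks like against an invented stand-in file:

```shell
# Toy demo of "apply this core patch"; class body and diff are invented.
set -e
work=$(mktemp -d); cd "$work"
mkdir -p core/lib
printf 'class ContentEntityBase {\n}\n' > core/lib/ContentEntityBase.php
# Stand-in patch adding the getLoadedRevisionId() method (not the real one)
cat > fix.patch <<'EOF'
--- a/core/lib/ContentEntityBase.php
+++ b/core/lib/ContentEntityBase.php
@@ -1,2 +1,5 @@
 class ContentEntityBase {
+  public function getLoadedRevisionId() {
+    return $this->loadedRevisionId;
+  }
 }
EOF
# -p1 strips the a/ and b/ prefixes, as with patches exported from git
patch -p1 < fix.patch
grep getLoadedRevisionId core/lib/ContentEntityBase.php
```

With the real patch you would run the same `patch -p1` (or `git apply`) from your Drupal root, then enable the workaround module.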

For more information and demonstration of the bug and the fix, see the video below.

Note: This is an alpha module with known issues and, by definition, is not covered by the Drupal Security policy and may have security vulnerabilities publicly disclosed.

Note: The Content Translation Workflow module works around the original issue by creating an additional revision based on the current default revision. This preserves existing forward revisions and their content, but effectively makes them past (rather than forward) revisions.

Bonus: The author of Content Translation Workflow, dawehner, has also created a companion module Content Translation Revision which adds a nice UI to translate individual revisions.

Categories: FLOSS Project Planets

Palantir: Competitive Analysis: The Key to a Woman's Healthy Heart - Part 1

Planet Drupal - Tue, 2017-03-21 14:45
Competitive Analysis: The Key to a Woman's Healthy Heart - Part 1 brandt Tue, 03/21/2017 - 13:45 Michelle Jackson Mar 21, 2017

In the healthcare field, meeting the needs of patients can be a matter of life and death.

In this post we will cover...
  • How health systems can conduct competitive analysis

  • How navigation organization and prioritization impacts the ability of people to find information on specific health topics such as heart disease and its impact on women’s health

  • How competitive analysis can help health systems conduct a cursory evaluation and improve information architecture to better serve people suffering from critical illnesses

  • How looking at peer competitors can help health systems better serve the needs of patients and their caregivers


Competitive analysis is an exercise, the importance of which transcends the borders of many industries, including healthcare. By taking a look at how your site compares to your competitors, you can ultimately make changes that allow you to better serve your patient’s specific needs.

In recognition of Women’s History Month, we are focusing on women’s health, specifically heart disease, the number one cause of death for women in the United States. We are also homing in on DrupalCon host city Baltimore, which has launched several initiatives to combat cardiovascular disease. The goal is to look at how two health systems in Charm City categorize and present information about cardiovascular disease on their public-facing websites.

Let’s imagine you have been tasked by the American Heart Association (AHA) with comparing and evaluating the websites of local health systems in the field of cardiology, focusing on how they serve women patients who suffer from cardiovascular disease. Where do we begin? Which competitors will we look at? Which dimensions, features, or site attributes are we comparing? What key tasks are important to patients and caregivers? How does search impact the site visitor journey to each competitor website?

By the time you finish reading this post, you will have the know-how to do a competitive analysis for a health-system or hospital website with a focus on particular health specialties and demographics. You will be able to see how your website measures against the competition at the specialty level and also in meeting the needs of specific patient and caregiver audiences.

What is competitive analysis?

As we discussed in Competitive Analysis on a Budget, competitive analysis is a user experience research technique that can help you see how your site compares with competitor websites in terms of content, design, and functionality. It can also lead to better decision-making when selecting new design and technical features for your site (e.g. search filter terms or search listing display). In this post, we’ll focus on the navigation and internal menu labels as our dimensions.

A Tale of Two Hospitals

Johns Hopkins Medicine and the University of Maryland Medical Center are two large university hospitals local to Baltimore that have centers dedicated to women and heart disease. The two centers are considered direct competitors because both offer the same service and function in the same way.

Fast Facts for Context
  • Women’s heart disease symptoms are complex and often differ from men’s symptoms. 
  • Women suffering from heart disease may not experience any symptoms at all.
  • In 2015, the Baltimore City Health Department released a report that cited cardiovascular disease as the leading cause of death in the city.
  • According to the 2015 Maryland Vital Statistics Annual Report, approximately 1 in 4 deaths in the Baltimore Metro Area were related to heart disease.
  • National and statewide statistics confirm cardiovascular disease is the leading cause of death for men and women.
It all begins with search

Search plays a key role in how patients and caregivers, especially women, find information about health conditions and treatment. In 2013, Pew Research’s Health Online Report noted that “women [were] more likely than men to go online to figure out a possible diagnosis.” The report also noted that “77% of online health seekers say they began at a search engine such as Google, Bing, or Yahoo.”

Specific search queries will likely bring this group of site visitors to a specific page, rather than to the homepage. This means the information architecture of health system internal pages plays a key role in providing patients and caregivers with information and resources about medical conditions and services. Competitive analysis can help us understand if and how these pages are meeting patient and caregiver needs.

Keywords are key

Keyword selection drastically impacts the results that are returned during a patient and caregiver search query. To demonstrate this, let’s start with a basic keyword search to evaluate how sites are optimizing search for topics like women and heart disease. As shown below, keywords can transform the information-seeking experience for women.

Figure 1: Google search with “women heart disease baltimore md” as key words

The first figure shows the search query results for “women heart disease baltimore md.” Johns Hopkins Women’s Cardiovascular Health Center and University of Maryland Medical Center Women’s Heart Program landing pages are both listed in the search results (Figures 2 and 3).

Figure 2: Johns Hopkins Women’s Cardiovascular Health Center landing page


Figure 3: University of Maryland Medical Center Women’s Heart Health Program landing page


Figure 4: Google search with “heart disease hospital baltimore md”

Search significantly impacts patient and caregiver access to health and hospital information. Google provides results based on previous search behavior, so results may vary by browser and search history, among other factors. We tried these terms using a private session and when logged into Google and saw little to no variance.

As shown in Figure 4, using different keywords in the search query yields different search results. “Heart disease hospital baltimore md” returns Johns Hopkins Heart & Vascular Institute as one of the top search results, but University of Maryland Medical Center’s Heart and Vascular Center is not returned as a top result, either when logged into Google Chrome or during a private session.

This is important to note because the University of Maryland Medical Center may want to look into methods to improve search engine optimization. There are different ways to address the absence of your website or landing page, product or service at the top of the site visitor’s search results listing.

Menu hierarchy and landing pages - when alphabetization complicates user experience

If women with heart disease choose keywords like “heart disease hospital baltimore md,” and do not indicate their gender in their query, they are brought to Heart & Vascular Health landing pages for each respective health system. Both landing pages use alphabetization to organize centers and programs. Because the centers and programs dedicated to women and heart disease begin with “W,” they sit at the bottom of the internal navigation.

This may pose a challenge to patients and caregivers entering the site from search queries that omit the word “women” (i.e. heart disease hospital baltimore md). These search query examples are not meant to represent the most common queries for people looking for information about heart disease in Baltimore; rather they demonstrate how different search queries can yield different results for people seeking this information.

Figure 5: Johns Hopkins Heart & Vascular Institute landing page


Figure 6: University of Maryland Medical Center Heart and Vascular Center

Internal Menu Labeling and Nesting

Now that we have seen how search impacts visitor pathways to the health system sites, let’s take a closer look at how Johns Hopkins Medicine and the University of Maryland Medical Center differ in presenting information in the internal menus for the centers and programs dedicated to women’s heart disease and heart health.

Figure 7: Johns Hopkins Heart & Vascular Institute landing page navigation

Multiple internal navigations within the Johns Hopkins Heart & Vascular Institute landing page and the current placement of the Women’s Cardiovascular Health Center at the bottom of the navigation hierarchy might make it challenging for patients looking for this particular center. Since centers provide services for patients, the placement of “centers of excellence” under “clinical services” may complicate site visitors’ understanding of resources and the relationship between services and centers. These types of naming conventions should be examined more closely.

Figure 8: Johns Hopkins Heart & Vascular Institute landing page internal navigations


Figure 9: University of Maryland Medical Center Women’s Heart Health Program landing page navigations

Like its competitor, the University of Maryland Medical Center has multiple internal navigations, which may also be cumbersome for users. Patients and caregivers face too many options, which may make it difficult for them to understand what they should do on the page and challenging to complete key tasks (e.g. researching risk factors, finding a physician, scheduling an appointment).

The University of Maryland Medical Center’s “Centers and Services” label might resonate better with site visitors because they can find both centers and services under it; Johns Hopkins Medicine’s placement of Centers of Excellence under Clinical Services could be confusing. Patients typically go to a center to receive clinical services; they don’t often go to a clinical service to find a center.

The University of Maryland Medical Center’s Heart & Vascular Center’s use of “Services” for one of its navigations might not be intuitive to site visitors. “Services” acts as a catch-all for conditions (e.g. aortic disease), topics (e.g. women’s heart health), and treatment options (e.g. heart and lung transplant), which may make it challenging for visitors to find what they are looking for on this page.

More specifically, a patient or caregiver looking for women’s heart health may not necessarily expect to find a program under “Services.” These items could be surfaced more quickly and more efficiently organized within Centers and Services so that the pathways to Women’s Heart Health are more intuitive to patients and their caregivers.

We’ll know if this is the case after we test these health system site pages with real visitors.

Figure 10: Competitive analysis matrix

In sum

So how do you design a website for women who may have asymptomatic heart disease? How do you integrate the needs of potential patients who experience neck and back pain as a symptom of their heart disease? We can gain a better understanding of specific cases like this by understanding the user journey of patients who exhibit non-traditional symptoms of heart disease and their caregivers by conducting competitive usability tests of these sites.

So what next?

Now that we’ve provided a cursory analysis and heuristic evaluation of the internal navigations of two health system sites, we’ll perform user tests on the websites to validate some of the hypotheses we discuss in this blog post and compare the content and design of the two health system sites. Keep an eye out for that post in a couple weeks!

Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: week 99 in Stretch cycle

Planet Debian - Tue, 2017-03-21 14:44

Here's what happened in the Reproducible Builds effort between Sunday March 12 and Saturday March 18 2017:

Upcoming events Reproducible Builds Hackathon Hamburg 2017

The Reproducible Builds Hamburg Hackathon 2017, or RB-HH-2017 for short, is a three-day hacking event taking place May 5th-7th in the CCC Hamburg hackerspace, located inside Frappant, a collective art space in a historical monument in Hamburg, Germany.

The aim of the hackathon is to spend a few days working on Reproducible Builds across distributions and projects. The event is open to anybody interested in working on Reproducible Builds issues, with or without prior experience!

Accommodation is available, and travel sponsorship may be available by agreement. Please register your interest as soon as possible.

Reproducible Builds Summit Berlin 2016

This is just a quick note, that all the pads we've written during the Berlin summit in December 2016 are now online (thanks to Holger), nicely complementing the report by Aspiration Tech.

Request For Comments for new specification: BUILD_PATH_PREFIX_MAP

Ximin Luo posted a draft version of our BUILD_PATH_PREFIX_MAP specification for passing build-time paths between high-level and low-level build tools. This is meant to help eliminate irreproducibility caused by different paths being used at build time. At the time of writing, this affects an estimated 15-20% of 25000 Debian packages.

This is a continuation of an older proposal SOURCE_PREFIX_MAP, which has been updated based on feedback on our patches from GCC upstream, attendees of our Berlin 2016 summit, and participants on our mailing list. Thanks to everyone that contributed!

The specification also contains runnable source code examples and test cases; see our git repo.

Please comment on this draft ASAP - we plan to release version 1.0 of this in a few weeks.

Toolchain changes
  • #857632 apt: ignore the currently running kernel if attempting a reproducible build (Chris Lamb)
  • #857803 shadow: Make the sp_lstchg shadow field reproducible. (Chris Lamb)
  • #857892 fontconfig: please make the cache files reproducible (Chris Lamb)
Packages reviewed and fixed, and bugs filed

Chris Lamb:

Reviews of unreproducible packages

5 package reviews have been added, 274 have been updated and 800 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (5)
  • Mattia Rizzolo (1)
diffoscope development

diffoscope 79 and 80 were uploaded to experimental by Chris Lamb. It included contributions from:

Chris Lamb:

  • Ensure that we really are using ImageMagick. (Closes: #857940)
  • Extract SquashFS images in one go rather than per-file, speeding up (eg.) Tails ISO comparison by ~10x.
  • Support newer versions of cbfstool to avoid test failures. (Closes: #856446)
  • Skip icc test that varies on endian if the Debian-specific patch is not present. (Closes: #856447)
  • Compare GIF images using gifbuild. (Closes: #857610)
  • Various other code quality, build and UI improvements.

Maria Glukhova:

  • Improve AndroidManifest.xml comparison for APK files. (Closes: #850758)
strip-nondeterminism development

strip-nondeterminism 0.032-1 was uploaded to unstable by Chris Lamb. It included contributions from:

Chris Lamb:

  • Fix a possible endless loop while stripping ar files due to trusting the file's file size data. Thanks to Tobias Stoeckmann for the report, patch and testcase. (Closes: #857975)
  • Add support for testing files we should reject.
tests.reproducible-builds.org

Misc.

This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets

Tanguy Ortolo: Bad support of ZIP archives with extra fields

Planet Debian - Tue, 2017-03-21 14:33

For sharing multiple files, it is often convenient to pack them into an archive, and the most widely supported format to do so is probably ZIP. Under *nix, you can archive a directory with Info-ZIP:

% zip -r something.zip something/

(When you have several files, it is recommended to archive them in a directory, to avoid cluttering the directory where people will extract them.)

Unsupported ZIP archive

Unfortunately, while we would expect ZIP files to be widely supported, I found out that this is not always the case: many recipients failed to open them under operating systems such as iOS.

Avoid extra fields

That issue seems to be linked to the use of extra file attributes, which are enabled by default in order to store Unix file metadata. The extra field was designed from the beginning so that each implementation can process the attributes it supports and ignore the others, but some buggy ZIP implementations appear not to function at all when extra fields are present.

Therefore, unless you actually need to preserve Unix file metadata, you should avoid using extra fields. With Info-ZIP, you would have to add the option -X:

% zip -rX something.zip something/
Categories: FLOSS Project Planets

Code Enigma: Do you really need composer in production?

Planet Drupal - Tue, 2017-03-21 12:26
Do you really need composer in production? Language English Return to Blog Do you really need composer in production?

It is now a common practice to use composer as part of the deployment stack. Is this always such a good idea?

Tue, 2017-03-21 16:26 By pascal

The recipe goes like this: gitignore your "vendor" directory (or whatever folder your dependencies end up in) but commit your composer.lock file, then deploy. Your CI job will then « composer install » all the dependencies where they belong, magically reproducing your initial file layout exactly as it was.

There are generally a few additional steps involved in between, though. Typically, you lose half a day figuring out the right file permissions so that the var/cache of your app can be cleared and recreated properly by the webserver user, wonder for days why some builds randomly fail before realizing that no token was set in a given job, meaning the GitHub API rate limit was sometimes hit, then spend another good day or two finding out how to apply two patches to the same project when they slightly conflict. And your sysadmin might be slightly suspicious about those files being downloaded and executed directly on production outside of any VCS, anxiously watching for exploit reports.

Now, you will assure me, you’ve nailed all that, and, apart from an occasional network glitch preventing packages from being fetched, all is running smoothly. Great.

But please, re-read the above. Why are you doing all this? To "reproduce your initial file layout exactly as it was". Why don’t you just commit the files and push them, then?

That is normally the point in the discussion when you’re supposed to use the words « reproducible » and « best practices ».



Right, but what is more reproducible than moving prebuilt files around? You have already played the recipe once on your dev environment; taking the risk of re-running it on production feels a bit like recompiling binaries from source simply because you can.

Composer is not magic. What it does is grab a bunch of PHP files, ensuring they are at the right version and that they end up in the right place, so they can play nicely together. Once you have the resulting file set already, why would you want to redo this over and over on each and every environment?
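The commit-the-files alternative argued for here can be sketched in a few git commands (project layout and names invented for the demo): build once in dev, force-add the vendor tree past the usual ignore rule, and let every other environment get the exact same bytes from the VCS:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
# Simulate a project built once in dev: lock file plus resulting vendor tree
echo '{"packages": []}' > composer.lock
mkdir -p vendor/acme/lib
echo '<?php // built dependency' > vendor/acme/lib/Lib.php
echo 'vendor/' > .gitignore   # the usual recipe ignores vendor/ ...
git add .gitignore composer.lock
git add -f vendor/            # ... but here we force-add the built files
git commit -qm 'Release build: custom code + dependencies'
git ls-files vendor/          # vendor files are now versioned and deployable
```

Production then needs nothing but `git pull` (or your usual artifact transfer): no composer run, no network access to Packagist or GitHub.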

Best practices

Let’s have a closer look at what is stated in the Composer documentation and the reasons why the project recommends not committing your dependencies:

  • large VCS repository size and diffs when you update code;
  • duplication of the history of all your dependencies in your own VCS; and
  • adding dependencies installed via git to a git repo will show them as submodules.

I’ll just ignore the repository size argument (because, frankly?) and focus on the diff and history parts.

For one, the argument here is slightly misleading: reading the statement, you might be under the impression that your repository will contain the whole git history of each and every dependency in your project. It will not. What you will end up with, over the life of your project, is the history of updates to your dependencies after your initial commit.

And that is the most important point here: this is a good thing! Why wouldn’t you want to be able to look at, and keep track of in your VCS, what changed in the update from Guzzle 3.8.0 to 3.8.1, or what the difference is between ctools 8.x-3.0-alpha27 and alpha26? Your « live » project is not only your custom code.

What would you find most useful, next time your client opens a ticket because the image embedding in the WYSIWYG editor has stopped working since the last release, when looking back at the commit « Upgrade contrib module media_entity from 8.x-1.5 to 8.x-1.6 » ? Seeing a one line hash change in composer.lock, or seeing a nice diff of the actual changes in code, so you can track down what went wrong?
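The auditing benefit is easy to demonstrate: with vendor/ committed, a dependency upgrade is an ordinary commit whose diff shows the actual code change rather than a one-line hash bump (package name, file, and versions invented):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
mkdir -p vendor/guzzle
echo 'VERSION = 3.8.0' > vendor/guzzle/Client.php
git add -A && git commit -qm 'guzzle 3.8.0'
echo 'VERSION = 3.8.1' > vendor/guzzle/Client.php
git add -A && git commit -qm 'Upgrade guzzle 3.8.0 to 3.8.1'
# The upgrade commit carries the real code change, greppable and revertable
git diff HEAD~1 -- vendor/
```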

The .git submodules point is fair, but easy to work around, as explained on that very same best practices page. Also keep in mind it only applies if you use dev versions or obscure non-packaged dependencies.



So, to sum it up, if you use Composer to build your code in production, you get:

   - Unneeded and time-consuming deployment complexity, with small but real risks of failure on every build from external causes

   - No auditing of changes that are not your own custom code

   + Easier handling of .git « false » submodules for a few dev dependencies


On the other hand, if you commit the "vendor" directory, you get:

   + Easier and straightforward deployment

   + All code that lands on production gets audited/versioned

   - Small amount of work involved in dealing with possible .git « false » submodules



Then why ?


But then, why is that such a widespread practice? I can only guess here, but I suspect there are several factors at play:

  • Fashion, to some extent, must play a role. There are very good reasons to do this for certain workflows, which may lead people to think that it can apply to any deployment workflow.
  • The fact that it is presented as « best practice » on the Composer project page. Many people apply it without questioning whether it is applicable to their use case.

My interpretation is that, more fundamentally, the root cause is confusion between "deploying" code and "distributing" code.

Moving a « living thing » from one environment to another is not the same process as making a component or app available for other projects to reuse and build upon. Composer is a fantastic building tool; it is great for the latter case, and using it to assemble your project totally makes sense. Using it as a deployment tool, less so.

If we take another look at the arguments above from a distribution perspective, the analysis is totally different:

  • Large VCS repository size and diffs when you update code.
  • Duplication of the history of all your dependencies in your own VCS.

Indeed, in this use case it all makes total sense: you definitely do not need the whole git history of any component you are re-using in your project. Nor do you want the repo of the nice web-crawler library you contribute on GitHub to contain the Guzzle codebase you depend upon.

In short, think about the usage. If you maintain, say, a Drupal custom distro that you use internally as a starting point for your projects, by all means yes, ignore the vendor directory. Build it with Composer when you use it to start a new project. And continue to use Composer to manage dependencies updates in your dev environment. However, once this is no longer a re-usable component, but instead a living project that will need to be deployed from environment to environment, do yourself a favour and consider carefully whether using Composer to deploy really brings any benefit.




Categories: FLOSS Project Planets

myDropWizard.com: Most common Drupal site building pitfalls and how to avoid them! (Part 2 of 3)

Planet Drupal - Tue, 2017-03-21 12:24

This is the second in a series of articles, in which I'd like to share the most common pitfalls we've seen, so that you can avoid making the same mistakes when building your sites!

myDropWizard offers support and maintenance for Drupal sites that we didn't build initially. We've learned the hard way which site building mistakes have the greatest potential for creating issues later.

And we've seen a lot of sites! Besides our clients, we also do a FREE in-depth site audit as the first step when talking to a potential client, so we've seen loads of additional sites that didn't become customers.

In the last article, we looked at security updates, badly installed module code and issues with patching modules, as well as specific strategies for addressing each of those problems. In this article, we'll look at how to do the most common Drupal customizations without patching!

NOTE: even though they might take a slightly different form depending on the version, most of these same pitfalls apply equally to Drupal 6, 7 and 8! It turns out that bad practices are quite compatible with multiple Drupal versions ;-)

Categories: FLOSS Project Planets

Colm O hEigeartaigh: Using OCSP with WS-Security in Apache CXF

Planet Apache - Tue, 2017-03-21 11:32
The OCSP (Online Certificate Status Protocol) is an HTTP-based protocol for checking whether a given X.509 certificate is revoked. It is supported in Apache CXF when TLS is used to secure communication between a web service client and server. However, it is also possible to use it with a SOAP request secured with WS-Security. When the client signs a portion of the SOAP request using XML digital signature, the service can be configured to check via OCSP whether the certificate in question is revoked. We will cover some simple test-cases in this post that show how this can be done.

The test-code is available on github here:
  • cxf-ocsp: This project contains a number of tests that show how a CXF service can validate client certificates using OCSP.
The project contains two separate test-classes for WS-Security in particular. Both are for a simple "double it" SOAP web service invocation using Apache CXF. The clients are configured with CXF's WSS4JOutInterceptor, to encrypt and sign the SOAP Body using credentials contained in keystores. For signature, the signing certificate is included in the security header of the request. On the receiving side, the services are configured to validate the signature and to decrypt the request. In particular, the property "enableRevocation" is set to "true" to enable revocation checking.

The first test, WSSecurityOCSPTest, is a conventional test of the OCSP functionality. Two Java security properties are set in the test-code to enable OCSP (the server runs in the same process as the client):
  • "ocsp.responderURL": The URL of the OCSP service
  • "ocsp.enable": "true" to enable OCSP
The first property is required if the client certificate does not contain the URL of the OCSP service in a certificate extension. Before running the test, install openssl and run the following command from the "openssl" directory included in the project (use the passphrase "security"):
  • openssl ocsp -index ca.db.index -port 12345 -text -rkey wss40CAKey.pem -CA wss40CA.pem -rsigner wss40CA.pem
Now run the test (e.g.  mvn test -Dtest=WSSecurityOCSPTest). In the openssl console window you should see the OCSP request data.
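For reference, the two properties above are Java security properties rather than system properties, so when setting them programmatically they go through java.security.Security. A minimal sketch, with a placeholder responder URL matching the openssl command above (this is an illustration, not the project's actual test code):

```java
import java.security.Security;

public class OcspConfig {
    // Enables OCSP checking for PKIX revocation validation.
    // The responder URL is only needed when the client certificate
    // does not carry a certificate extension pointing at the OCSP service.
    public static void enable(String responderUrl) {
        Security.setProperty("ocsp.enable", "true");
        Security.setProperty("ocsp.responderURL", responderUrl);
    }

    public static void main(String[] args) {
        enable("http://localhost:12345");
        System.out.println(Security.getProperty("ocsp.enable"));
    }
}
```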

The second test, WSSecurityOCSPCertTest, tests the scenario where the OCSP service signs the response with a different certificate to that of the issuer of the client certificate. Under ordinary circumstances, OCSP revocation checking will fail, and indeed this is tested in the test above. However it's also possible to support this scenario, by adding the OCSP certificate to the service truststore (this is already done in the test), and to set the following additional security properties:
  • "ocsp.responderCertIssuerName": DN of the issuer of the cert
  • "ocsp.responderCertSerialNumber": Serial number of the cert
Launch Openssl from the "openssl" directory included in the project:
  • openssl ocsp -index ca.db.index -port 12345 -text -rkey wss40key.pem -CA wss40CA.pem -rsigner wss40.pem
and run the test via "mvn test -Dtest=WSSecurityOCSPCertTest".
Categories: FLOSS Project Planets

InternetDevels: Great responsive Drupal themes for construction websites: built for builders!

Planet Drupal - Tue, 2017-03-21 10:12

We once offered you a selection of free responsive Drupal themes,
as well as advanced tutorials on creating themes and subthemes.
Today, our focus will be very specific: we will discuss

Read more
Categories: FLOSS Project Planets


Acquia Developer Center Blog: Cultivating Open Source and Drupal in China

Planet Drupal - Tue, 2017-03-21 09:18

Following on from my previous blog posts around how Drupal and open-source are growing in China, we must start looking at how the overall ecosystem can be nurtured to turn one of the most populous countries in the world on to Drupal.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Kdenlive café #15 tonight

Planet KDE - Tue, 2017-03-21 08:46

Kdenlive development might look a bit slow these last months, but we are very busy behind the scenes. You can join us tonight at our monthly café to get an insight into the current developments, follow the discussions or ask your questions.

The café will be at 21:00 European time, on irc.freenode.net, channel #kdenlive

More news on the next releases will follow soon, so stay tuned.

Categories: FLOSS Project Planets

PyCharm: Inside the Debugger: Interview with Elizaveta Shashkova

Planet Python - Tue, 2017-03-21 08:30

PyCharm 2017.1 has several notable improvements, but there’s one that’s particularly fun to talk about: debugger speedups. PyCharm’s visual debugger regularly gets top billing as the feature our customers value the most. Over the last year, the debugger saw a number of feature improvements and several very impressive speedups. In particular, for Python 3.6 projects, PyCharm can use a new Python API to close the gap with a non-debug run configuration.

If you’ve been to PyCon or EuroPython and come by our booth, chances are you’ve seen Elizaveta Shashkova talking about the debugger to a PyCharm user, or giving a conference talk. Let’s talk to Liza about her work on PyCharm, the debugger, and her upcoming talk at PyCon.

Can you share with us a bit of your background and what you do on the PyCharm development team?

I started my career at JetBrains as a Summer Intern two and a half years ago – I implemented a debugger for Jinja2 templates and Dmitry Trofimov (the creator of PyCharm’s debugger) was my mentor. After that, I joined the PyCharm Team as a Junior developer and implemented a Thread Concurrency Visualizer under the supervision of Andrey Vlasovskikh, and my graduation thesis was based on it.

At the moment I’m supporting the Debugger and the Console in PyCharm.

People really like PyCharm’s debugger. Can you describe how it works, behind the scenes?

The debugger consists of two main parts: the UI (written in Java and Kotlin) and the Python debugger (written in Python). The most interesting part is on the Python side – the pydevd module, which we share with PyDev (the Python plugin for Eclipse).

We don’t use the pdb standard debugger for Python, but we implement our own debugger. At first glance, it’s quite simple, because it’s based on the standard sys.settrace() function, and in fact it just handles events which the Python interpreter produces for every line in the running script.
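A minimal sketch of that mechanism (not PyCharm's actual code): sys.settrace() installs a global trace function that is invoked on each frame's "call" event, and returning a local tracer from it opts that frame into per-line events.

```python
import sys

def make_tracer(log):
    # The global trace function is invoked on every "call" event;
    # returning local_trace opts that frame into per-line events.
    def global_trace(frame, event, arg):
        if event == "call":
            return local_trace
        return None

    def local_trace(frame, event, arg):
        if event == "line":
            log.append((frame.f_code.co_name, frame.f_lineno))
        return local_trace

    return global_trace

def demo():
    x = 1
    y = x + 1
    return y

log = []
sys.settrace(make_tracer(log))
result = demo()
sys.settrace(None)
```

A real debugger's local tracer would check the current line against its breakpoints on every single "line" event, which is exactly where the overhead described below comes from.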

Of course, there are a lot of interesting frameworks and Python modules where debugging doesn’t work by default; that’s why we add special support inside the debugger: for PyQt threads, for interactive mode in pyplot, for the creation of new processes, for debugging in Docker, and others.

Note: A year ago we did an interview with the creator of PyDev about the funded debugger speedups.

The Cython extensions gave a big speedup. Can you explain how it works and the performance benefit?

Yes, Cython speedups were implemented by Fabio Zadrozny and they gave a 40% speed improvement for the debugger. Fabio found the most significant debugger bottlenecks and optimized them. Some of them were rewritten in Cython and it gave even more – a 140% speed improvement. Fabio has done a really great job!

On to new stuff. The upcoming PyCharm, paired with Python 3.6, gives some more debugger speedups, right?

Yes, as I’ve already mentioned, for Python interpreters before version 3.6 we used to use the standard Python tracing function. But in Python 3.6 the new frame evaluation API was introduced and it gave us a great opportunity to avoid using tracing functions and instead implement a new mechanism for debugging.

And it gave us a really significant performance improvement: for example, in some cases the debugger has become 80 times faster than it used to be, and it has become at least 10 times faster in the worst case. In some particular cases, it has become almost as fast as running without debugging.

We have had many user reports about the debugger’s slowness, and now I hope those users will be happy to try the new, fast version of the debugger. Unfortunately, it’s available for Python 3.6 only.

What changed in Python 3.6 to allow this?

The new frame evaluation API was introduced to CPython in PEP 523, and it allows specifying a per-interpreter function pointer to handle the evaluation of frames.

In other words, it means that we can get access to the code when entering a new frame, but before the execution of this frame has started. And this means that we can modify the frame’s code and introduce our breakpoints right into bytecode: execution of the frame hasn’t started yet so we won’t break anything.

When we used the tracing function, the idea was similar: when entering a new frame we checked whether there were any breakpoints in the current frame and, if so, continued tracing for every line in the frame. And sometimes that led to serious performance problems.

But in the new frame evaluation debugger, this problem was solved: we just introduce the breakpoint into the code and the other lines in the scope don’t matter. Instead of adding an additional call to the tracing function for every line in the frame, with Python 3.6 we add just one call to “breakpoint”, and that means that the script under debugging runs almost as fast as without the debugger.

Congratulations on your Python 3.6 Debugging talk being accepted for PyCon. Who will be interested in your talk?

This talk will be interesting for people who want to learn something new about the features of Python 3.6. Also, it will be useful for people who want yet another reason to move to Python 3.6: a fast debugger, which should appear in many Python IDEs.

Moreover, after the talk people will understand how PyCharm’s debugger works, and why such fast debugging wasn’t possible in previous versions of Python.
This talk is for experienced Python developers who aren’t afraid of calling Python’s C API functions and doing bytecode modifications.

What is the next big thing in debugging to work on in the next year?

We have a rather old and important problem in the debugger, related to evaluating and displaying big objects. This problem has existed from the beginning, but it has become really important during the last few years. I believe it gained visibility due to the increased number of scientists who use PyCharm and work with big data frames. At the moment we have some technical restrictions on implementing this feature, but we’re going to implement it in the near future.

Categories: FLOSS Project Planets

A distro-agnostic AUR: would it be useful?

Planet KDE - Tue, 2017-03-21 07:52

If you read my recent blog post, you know that I defined a file type (*.webapp) that contains instructions to build an Electron web app, I wrote a script to install *.webapp files (nativefier-freedesktop) and a script/wizard to build *.webapp files starting from a URL. And I published a first web app on the KDE Store / Opendesktop / Linux-Apps.

What inspired me was AUR (the Arch Linux User Repository): since it’s not safe to install binaries distributed by users, AUR instead hosts instructions to automatically download sources, build an Arch package and install it. The principle of *.webapp is the same: instructions that let users build web apps locally, possibly with custom CSS/JS to have, for example, a dark version of some famous site like YouTube.

Also, when I use KDE Neon or other distros I miss AUR a lot: on it you can find everything and install it quickly, and you can also find Git versions of apps whose stable releases are in the official repos. So I thought: since there are now distro-agnostic packages, like Flatpak, Snap and AppImage, why not create the “distro-agnostic AUR”? It would work exactly like AUR, but at the end of the installation process it would create not an Arch package but a Flatpak/Snap/AppImage one.

So a developer could distribute, for example, the Flatpak of the 1.0 stable version of his app, and a user could write a DAUR (“Distro-agnostic User Repository”) package with the recipe to build a Flatpak using the sources from Git, so other users would be able to install both the official 1.0 version and the development version as Flatpaks. Or a user could write a recipe for Snap because he doesn’t like that the developer distributes only a Flatpak, and so on; the use cases are many.
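Purely to illustrate the idea (every field name below is invented; this is not the actual *.webapp format or any existing DAUR format), such a recipe could be as small as:

```ini
# hypothetical DAUR recipe: build a Flatpak of the Git version of an app
name=myapp-git
source=https://github.com/example/myapp.git
target=flatpak
build=flatpak-builder --force-clean build-dir org.example.MyApp.json
```

The point would be the same as with AUR's PKGBUILDs: the repository hosts only the build instructions, and the package is assembled locally on the user's machine.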

The DAUR packages could be hosted as a category on Opendesktop / Linux-Apps / KDE Store, and software centers like Discover could support them. Here is a mockup to explain what I mean:

If you like the idea, please share it to grow the interest.

Categories: FLOSS Project Planets

Python Does What?!: When you can update locals()

Planet Python - Tue, 2017-03-21 07:46
There are two built-in functions, globals and locals.  These return dicts of the contents of the global and local scope.

Locals usually refers to the contents of a function, in which case it is a one-time copy.  Updates to the dict do not change the local scope:

>>> def local_fail():
...    a = 1
...    locals()['a'] = 2
...    print 'a is', a
>>> local_fail()
a is 1

However, in the body of a class definition, locals points to the __dict__ of the class, which is mutable.

>>> class Success(object):
...    locals().update({'a': 1})
>>> Success.a
1
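The same contrast holds in Python 3 (shown here with the print-free, return-based style so it runs on both); note that the class-body behavior relies on a CPython detail, namely that locals() in a class body exposes the mapping the class is built from:

```python
def local_fail():
    a = 1
    # updates a one-time snapshot, not the real local scope (CPython)
    locals()["a"] = 2
    return a

class Success:
    # in a class body, locals() is the namespace the class is built from
    locals().update({"a": 1})
```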
Categories: FLOSS Project Planets

Simple is Better Than Complex: Class-Based Views vs. Function-Based Views

Planet Python - Tue, 2017-03-21 07:28

If you follow my content here in the blog, you probably have already noticed that I’m a big fan of function-based views. Quite often I use them in my examples. I get asked a lot why I don’t use class-based views more frequently. So I thought about sharing my thoughts about that subject matter.

Before reading, keep this in mind: class-based views do not replace function-based views.


I’m not an old-school Django developer who used it when there were only function-based views and watched class-based views being released. But there was a time when only function-based views existed.

I guess it didn’t take long until all sorts of hacks and solutions were created to extend and reuse views, and make them more generic.

There was a time when function-based generic views were a thing. They were created to address the common use cases. But the problem was that they were quite simple, and it was very hard to extend or customize them (other than through configuration parameters).

To address those issues, class-based views were created.

Now, views are always functions. Even class-based views.

When we add them to the URL conf using the View.as_view() class method, it returns a function.

Here is what the as_view method looks like:

class View:

    @classonlymethod
    def as_view(cls, **initkwargs):
        """Main entry point for a request-response process."""
        for key in initkwargs:
            # Code omitted for clarity
            # ...

        def view(request, *args, **kwargs):
            self = cls(**initkwargs)
            if hasattr(self, 'get') and not hasattr(self, 'head'):
                self.head = self.get
            self.request = request
            self.args = args
            self.kwargs = kwargs
            return self.dispatch(request, *args, **kwargs)

        # Code omitted for clarity
        # ...

        return view

Parts of the code were omitted for clarity; you can see the full method on GitHub.

So if you want to explicitly call a class-based view here is what you need to do:

return MyView.as_view()(request)

To make it feel more natural, you can assign it to a variable:

view_function = MyView.as_view()
return view_function(request)

The view function returned by the as_view() method is the outer part of every class-based view. Once called, the view passes the request to the dispatch() method, which will execute the appropriate method according to the request type (GET, POST, PUT, etc).
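To make the flow concrete, here is a stripped-down, framework-free sketch of the same dispatch pattern (the Request class is a toy stand-in, not Django's API, and real Django does considerably more):

```python
class View:
    @classmethod
    def as_view(cls, **initkwargs):
        # Returns a plain function, just like Django's as_view().
        def view(request, *args, **kwargs):
            self = cls(**initkwargs)
            return self.dispatch(request, *args, **kwargs)
        return view

    def dispatch(self, request, *args, **kwargs):
        # Route to the method named after the HTTP verb, if defined.
        handler = getattr(self, request.method.lower(),
                          self.http_method_not_allowed)
        return handler(request, *args, **kwargs)

    def http_method_not_allowed(self, request, *args, **kwargs):
        return "405 Method Not Allowed"

class Request:
    def __init__(self, method):
        self.method = method

class ContactView(View):
    def get(self, request):
        return "render form"

    def post(self, request):
        return "process form"
```

Calling ContactView.as_view() hands you a function; the class machinery only runs once a request comes in.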

Class-Based View Example

For example, if you created a view extending the django.views.View base class, the dispatch() method will handle the HTTP method logic. If the request is a POST, it will execute the post() method inside the view, if the request is a GET, it will execute the get() method inside the view.


from django.views import View

class ContactView(View):
    def get(self, request):
        # Code block for GET request

    def post(self, request):
        # Code block for POST request


urlpatterns = [
    url(r'contact/$', views.ContactView.as_view(), name='contact'),
]

Function-Based View Example

In function-based views, this logic is handled with if statements:


def contact(request):
    if request.method == 'POST':
        # Code block for POST request
    else:
        # Code block for GET request (will also match PUT, HEAD, DELETE, etc)


urlpatterns = [
    url(r'contact/$', views.contact, name='contact'),
]

Those are the main differences between function-based views and class-based views. Now, Django’s generic class-based views are a different story.

Generic Class-Based Views

Generic class-based views were introduced to address the common use cases in a web application, such as creating new objects, form handling, list views, pagination, archive views and so on.

They ship in the Django core, and you can import them from the module django.views.generic.

They are great and can speed up the development process.

Here is an overview of the available views:

Simple Generic Views
  • View
  • TemplateView
  • RedirectView
Detail Views
  • DetailView
List Views
  • ListView
Editing Views
  • FormView
  • CreateView
  • UpdateView
  • DeleteView
Date-Based Views
  • ArchiveIndexView
  • YearArchiveView
  • MonthArchiveView
  • WeekArchiveView
  • DayArchiveView
  • TodayArchiveView
  • DateDetailView

You can find more details about each implementation in the official docs: Built-in class-based views API.

I find them a little bit confusing, because the generic implementations use a lot of mixins, so, at least for me, the code flow is sometimes not very obvious.

Now here is a great resource, also from the Django Documentation, a flattened index with all the attributes and methods from each view: Class-based generic views - flattened index. I keep this one in my bookmarks.

The Different Django Views Schools

Last year I got myself a copy of the Two Scoops of Django: Best Practices for Django 1.8 book. It’s a great book. Each chapter is self-contained, so you don’t need to read the whole book in order.

In the chapter 10, Daniel and Audrey talk about the best practices for class-based views. They brought up this tip that was very interesting to read, so I thought about sharing it with you:

School of “Use all the generic views”!
This school of thought is based on the idea that since Django provides functionality to reduce your workload, why not use that functionality? We tend to belong to this school of thought, and have used it to great success, rapidly building and then maintaining a number of projects.

School of “Just use django.views.generic.View”
This school of thought is based on the idea that the base Django CBV does just enough and is ‘the True CBV, everything else is a Generic CBV’. In the past year, we’ve found this can be a really useful approach for tricky tasks for which the resource-based approach of “Use all the views” breaks down. We’ll cover some use cases for it in this chapter.

School of “Avoid them unless you’re actually subclassing views”
Jacob Kaplan-Moss says, “My general advice is to start with function views since they’re easier to read and understand, and only use CBVs where you need them. Where do you need them? Any place where you need a fair chunk of code to be reused among multiple views.”

Excerpt from “Two Scoops of Django: Best Practices for Django 1.8” - 10.4: General Tips for Django CBV, page 121.

The authors said in the book they are in the first school. Personally, I’m in the third school. But as they said, there is no consensus on best practices.

Pros and Cons

For reference, some pros and cons about function-based views and class-based views.

Function-Based Views

Pros:
  • Simple to implement
  • Easy to read
  • Explicit code flow
  • Straightforward usage of decorators

Cons:
  • Hard to extend and reuse the code
  • Handling of HTTP methods via conditional branching

Class-Based Views

Pros:
  • Can be easily extended, reuse code
  • Can use O.O. techniques such as mixins (multiple inheritance)
  • Handling of HTTP methods by separate class methods
  • Built-in generic class-based views

Cons:
  • Harder to read
  • Implicit code flow
  • Hidden code in parent classes, mixins
  • Use of view decorators requires extra import, or method override

There is no right or wrong. It all depends on the context and the needs. As I mentioned at the beginning of this post, class-based views do not replace function-based views. There are cases where function-based views are better. In other cases class-based views are better.

For example, if you are implementing a list view and you can get it working just by subclassing ListView and overriding the attributes, great. Go for it.

Now, if you are performing a more complex operation, handling multiple forms at once, a function-based view will serve you better.


The reason why I often use function-based views in my post examples is that they are easier to read. Many readers who stumble upon my blog are beginners, just getting started. Function-based views communicate better, as the code flow is explicit.

I usually start my views as function-based views. If I can use a generic class-based view just by overriding the attributes, I go for it. If I have some very specific needs that will replicate across several views, I create my own custom generic view by subclassing django.views.generic.View.

Now a general piece of advice from the Django documentation: if you find you’re struggling to implement your view as a subclass of a generic view, then you may find it more effective to write just the code you need, using your own class-based or function-based views.

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: Drupal Logos representing National Identities

Planet Drupal - Tue, 2017-03-21 06:23
We are now very deep into our »Druplicon marathon«. After already presenting you Drupal Logos in Human and Superhuman forms, Drupal Logos as Fruits and Vegetables, Druplicons in the shapes of Animals and Drupal Logos taking part in outdoor activities, it's now time to look at Drupal Logos representing national identities. A sense of belonging to one nation can be very strong. National identities are therefore also present in various Druplicons. The latter mostly represent active or inactive Drupal Groups from specific countries. These groups connect Drupalistas from that specific… READ MORE
Categories: FLOSS Project Planets