FLOSS Project Planets

Red Route: Knowledge Is Dead, Long Live Learning

Planet Drupal - Fri, 2017-11-10 01:22

This article was originally posted on the Capgemini Engineering blog

There's a certain inescapable truth that people who work with technology need to face. As time goes by, the knowledge we’ve gained almost inevitably becomes obsolete. If we specialise in something, how do we deal with the fact that our specialism, which may even have been cutting edge technology that we were pioneering, eventually becomes a legacy system? As Ellen Ullman put it, "The corollary of constant change is ignorance ... we computer experts barely know what we are doing."

Front end developers are very familiar with this feeling, confronted so frequently with the dizzying pace of change in the world of JavaScript frameworks. Once upon a time, I was very proud of my ability to make CSS layouts work in IE7. Now all those tricks and hacks are little more than worthless trivia, perhaps less valuable than actual trivia. At least knowing who scored the winner in the 1973 FA Cup final might help in a pub quiz - I can't imagine that being able to prefix properties with an asterisk will ever come in handy, but it's taking up storage space in my brain. Now that CSS grid is becoming widespread, everything I've learned about floats (and even flexbox) is becoming less and less useful. There are even some people (although I'm not one of them) who would say that CSS itself no longer has value. Similarly, jQuery is already on its way to joining YUI and MooTools in the graveyard of things I used to know about, and experienced members of the Drupal community have recently been coming to terms with the fact that in order for the technology to progress, we'll have to unlearn some of our old ways.

It isn't just true for technology. London taxi drivers are finding that their hard-earned Knowledge is being made obsolete by satnav, and before too long, the skill of driving will itself have gone the way of basket weaving or being able to pilot a horse-drawn buggy - something that might still be interesting for the enthusiast, but isn’t relevant to most people’s lives.

Confronted with the unpleasant reality that our hard-learned skills are becoming outdated, what's the appropriate response? Do we follow the example of the Luddites and rage against the evolution of the machines? It's easy to fall victim to the sunk cost fallacy, and ego provides a strong temptation to hang on to our guru status, even if we're experts in an area that is no longer useful. If you're a big fish in a shrinking pond, you may need to be careful that your pond doesn't dry up entirely. Besides, do you really want to work on legacy systems? Having said that, if your legacy system is still mission-critical somewhere, and migrating away would be a big job, there's good money to be made - just ask the people working on COBOL.

I think there's a healthier way of looking at this. With the internet acting as a repository of knowledge, and calculators at our fingertips, education is evolving. There's no longer much value in memorising times tables, or knowing the date of the battle of Culloden. As my colleague Sarah Saunders has written, you're never too old to learn, but the value of learning things is greater than the value of the facts or skills that we learn - the meta-skill of learning is the really useful thing. Then again, I would say that, having done a philosophy degree.

For example, the time and effort I put into learning French and German at school doesn’t currently seem like a worthwhile investment, when I think about how frequently I use those languages. But I would never say that it was a waste of time. When I lived in Tokyo, having learned other languages definitely helped when it came to learning Japanese. Then again, these days I don’t often spend any time in Japan or with Japanese people, so the current value of that effort seems low. But do I regret spending that effort? Not a bit. It helped me to make the most of my life in Japan, and besides, it was interesting.

Two of the most compelling conference talks I've heard in the last few years touched on this theme from different directions. Andrew Clarke and Patrick Kua both emphasised the importance of focussing on the underlying principles, rather than chasing whatever the current new hotness might be. Designers and developers can learn something from Yves Saint Laurent: "Fashions fade, style is eternal".

We need to recognise that things will always keep changing. Perhaps we could help ourselves to acknowledge the impermanence of our skills by saying some kind of ceremonial goodbye to them. I have an absurd vision of a Viking funeral, where a blazing longboat sails away full of old O'Reilly books. We may not need to go that far, but we do need to remind ourselves that what we've learned has served us well, even if that knowledge is no longer directly applicable. A knowledge funeral could be an opportunity to mourn for the passing of a skill into obsolescence, and to celebrate the value of learning.


Tags: learning, development, psychology, Capgemini, Drupal
Categories: FLOSS Project Planets

Norbert Preining: ScalaFX: dynamic update of context menu of table rows

Planet Debian - Fri, 2017-11-10 00:16

Context menus are useful for exposing additional functionality. For my TLCockpit program I am listing the packages, updates, and available backups in a TreeTableView. The context menu for each row should differ depending on the status of the content displayed.

My first try, taken from searches on the web, was to add the context menu via the rowFactory of the TreeTableView:

table.rowFactory = { p =>
  val row = new TreeTableRow[SomeObject] {}
  val infoMI = new MenuItem("Info") { onAction = /* use row.item.value */ }
  val installMI = new MenuItem("Install") { onAction = /* use row.item.value */ }
  val removeMI = new MenuItem("Remove") { onAction = /* use row.item.value */ }
  val ctm = new ContextMenu(infoMI, installMI, removeMI)
  row.contextMenu = ctm
  row
}

This worked nicely until I tried to disable/enable some items based on the status of the displayed package:

...
val pkg: SomeObject = row.item.value
val isInstalled: Boolean = /* determine installation status of pkg */
val installMI = new MenuItem("Install") {
  disable = isInstalled
  onAction = /* use row.item.value */
}

What I did here is just pull the shown package, get its installation status, and disable the Install context menu entry if it is already installed.

All good and fine, I thought, but reality was different. First there were NullPointerExceptions (a rare occurrence in Scala for me), and then somehow it didn't work out at all.

The explanation is quite simple, and can be found by printing something in the rowFactory function: only as many rows are created as fit into the current screen size (plus a bit), and their content is dynamically updated when one scrolls. But the enable/disable status of the context menu entries was not being updated along with the content.

To fix this, one needs to add a callback on the displayed item, which is exposed in row.item. So the correct code is (assuming that a SomeObject has a BooleanProperty installed):

table.rowFactory = { p =>
  val row = new TreeTableRow[SomeObject] {}
  val infoMI = new MenuItem("Info") { onAction = /* use row.item.value */ }
  val installMI = new MenuItem("Install") { onAction = /* use row.item.value */ }
  val removeMI = new MenuItem("Remove") { onAction = /* use row.item.value */ }
  val ctm = new ContextMenu(infoMI, installMI, removeMI)
  row.item.onChange { (_, _, newContent) =>
    if (newContent != null) {
      val isInstalled: Boolean = /* determine installation status from newContent */
      installMI.disable = isInstalled
      removeMI.disable = !isInstalled
    }
  }
  row.contextMenu = ctm
  row
}


That’s it, the context menus are now correctly adapted to the displayed content. If there is a simpler way, please let me know.

Categories: FLOSS Project Planets

Morpht: Announcing Entity Class Formatter for Drupal 8

Planet Drupal - Thu, 2017-11-09 22:06

The Entity Class Formatter is a nifty little module which allows editors to place classes onto entities, allowing their look and feel to be altered in the theme layer or with other modules such as Looks and Modifiers. It is now available for Drupal 8.

Entity Class Formatter is a humble little module; however, it does open up a lot of possibilities. The most obvious is to use the theme layer to provide styles for the class which has been applied. This makes it possible for the “designer” at design time to prepare some different styles to pick from. It is, however, possible to use the module in a more flexible way and combine it with Modifiers and Looks.

Once a class has been defined and added to a field, a “Look Mapping” can be defined, associating the class with a set of Modifiers. The site builder or skilled editor can then go in and define any number of Modifiers which will be fired with the class.

For example, a “my-awesome-class” class could be created which is wired into a “field_my_awesome” set of Modifiers. The Modifiers could include a blue linear gradient with a white text overlay and generous padding. All of this configuration happens in the UI after deployment, making for a very flexible and powerful system that can be adapted to the styles you wish to apply.

Basic use of Entity Class Formatter

The use of the module is very easy. We can, for example, define our own class on an article.

The first thing we need to do is enable the module. Once installation is complete, we can go and add our custom field. In this little tutorial we will simply add the class onto the Article content type. So go to Structure > Content types > Article > Manage fields and add a new text field. We can name the field simply "Class" and save it. As we keep all the defaults, we can save it on the next page too.


Now the last thing we need to do to make it work is set the Entity Class format on the field on the Manage display page. Go to Structure > Content types > Article > Manage display and change the format to "Entity Class". There's no need for any other manipulation of the field, because it won't render any values visible to the visitor of the page.


That's it! Now we can go and create an article (Content > Add content > Article) and input a class into our field...

... voilà, the class is there!

Similar but different

There are a couple of modules out there which are similar, but different enough that they are not totally suited to our requirements.

Classy Paragraphs, available in Drupal 8, has been influential in the Paragraphs ecosystem and has opened the way for creative designs. It was intended to apply to Paragraphs only and is quite structured in the way classes are applied through a custom field type. The Entity Class Formatter module is more flexible in that it has been designed to apply to all entity types. It can also handle a variety of field types (text, select lists, entity references) and is able to adapt to the data at hand. So, Entity Class Formatter has a similar aim - it is just somewhat more generic.

Field Formatter CSS Class, available in Drupal 7, also uses a field formatter approach to applying the class to the entity. It does have more complexity than this module because it deals with several layers of nesting. The Entity Class Formatter is very simple and only applies to the parent entity of the field.

Entity Class Formatter was inspired by Classy Paragraphs (thanks) and is supported by the team at Morpht.
Categories: FLOSS Project Planets

Community Over Code: Legal Issues Around SOFTWARE FREEDOM Trademarks

Planet Apache - Thu, 2017-11-09 21:55

The Software Freedom Law Center (SFLC) recently filed with the USPTO to cancel the registered trademark SOFTWARE FREEDOM CONSERVANCY, owned by the non-profit of the same name. Both SFLC and Conservancy have a long history together with several people working for both.

Since this is a TTAB legal proceeding – not in federal court – here’s a brief review of the legal aspects of this case, from an experienced layperson.

Both SFLC and Conservancy have been granted registrations of their full names by the USPTO; thus they can use the ® symbol after their names, and registration gives them a few extra rights in the US as to how their names may be used. The obvious root issue is that both organizations share two words in their name:

  • SOFTWARE FREEDOM LAW CENTER®
  • SOFTWARE FREEDOM CONSERVANCY®

Merely having somewhat similar names isn’t a problem with trademarks, unless consumers are confused about which organization is providing specific services or products.  This similarity is the core argument in the SFLC’s petition to cancel the Conservancy’s registration. But there are a lot of questions about the petition, including that the SFLC aided Conservancy in incorporating and registering their name! It’s very strange to petition to cancel something you yourself helped to create, which is used as a defense by Conservancy.

What’s Happened So Far

The SFLC filed a Petition to Cancel with the USPTO, essentially asking to have the SOFTWARE FREEDOM CONSERVANCY registration canceled – making it as if it hadn’t been registered.  This doesn’t necessarily mean that Conservancy couldn’t use their name as a trademark, but it’s likely that if the SFLC actually won the cancellation they would try to force an effective name change of Conservancy.

Note that the current legal action has just started – the paperwork takes a while to bubble through the Trademark Trial and Appeal Board (TTAB) process. This action also only affects registrations in the US – most countries have their own trademark registration bodies and laws, so it doesn't necessarily change what happens outside the US. Conservancy has a currently granted registration for their name in the EU, which helps protect their rights in many EU member countries.

Evaluating the Petition To Cancel by SFLC

To make it easier to follow along, I’ve annotated a PDF of the SFLC’s Petition to Cancel with the paragraph-by-paragraph Answer to the petition.  Reading along both sides of the case – at least as it is so far – is instructive.

The SFLC starts by stating it “believes that it has been and will continue to be damaged by U.S. Trademark Registration No. 4,212,971 for the mark SOFTWARE FREEDOM CONSERVANCY, and hereby petitions to cancel the same…“, and then details some background information and specific claims.

Most of the petitioner’s claims are merely restating various facts or allegations about the trademark registrations in question, and they also go into detail about who worked where at what time. In particular, Karen M. Sandler and Bradley M. Kuhn both served as officers at the SFLC at the time that Conservancy was spun off as a separate corporation, as well as being involved in the registration of the Conservancy’s mark. It’s actually a little odd seeing some of the claims made there about details of who was where when – I don’t think many of those claims really support the petition.

Para 18 gets to the real issue: the claim “Registrant’s SOFTWARE FREEDOM CONSERVANCY Mark is confusingly similar to Petitioner’s SOFTWARE FREEDOM LAW CENTER Mark.” However, they spend time talking about the similar sounds but don’t get into as many details about the actual services provided.  Trademarks associate the mark with a specific producer of a service/product in the minds of a consumer – so it’s critical to show that the services offered under both names are actually similar – and are similar in terms of claimed goods.

Para 24 is pretty odd considering the history of how the mark was originally applied for:

24. Registrant’s (Conservancy) registration should be canceled because it consists of or comprises a mark which so resembles Petitioner’s (SFLC) previously used and registered SOFTWARE FREEDOM LAW CENTER Mark as to be likely, when used in connection with Registrant’s goods and services, to cause confusion, mistake, or deception within the meaning of 15 U.S.C § 1052(d), and to cause damage to Petitioner thereby.

This might make tortuous legal sense within the TTAB, but not any common sense given that the SFLC helped to register that mark 6 years ago!

Claims 25-32 don’t even seem to be particularly applicable – sure, they’re facts of the case, but don’t show why the registration should be canceled.

Evaluating the Defense Response by Conservancy

Following the blue text in my annotated PDF, we can see that the response is roughly: “No, you’ve got nothing valid here to bring cancellation, and here’s why”.

Many of the factual claims are admitted, but some of the basic statements are “Denied”, presumably meaning Conservancy argues to the TTAB that those petition claims aren’t correct or aren’t relevant.  In particular, the responses are in general much more direct and specific than the petition’s claims (including one factual correction), which seems to support my first thought that some of the claims are just (ahem) puffery.

The interesting part is, of course, the AFFIRMATIVE DEFENSES, where Conservancy argues that the petition is bogus and should be thrown out of the TTAB.  They include:

1. Petitioner’s claim fails to state a claim upon which relief can be granted.
2. Petitioner’s claim is barred by the doctrine of unclean hands.
3. Petitioner’s claim is barred by the doctrine of laches.
4. Petitioner’s claim is barred by the doctrine of estoppel.
5. Petitioner’s claim is barred by the doctrine of acquiescence.

While I’m not a lawyer (please note: none of this is legal advice!), the first point there seems pretty specific: your petition is bogus. (And it seems bogus for practical and community reasons as well.)

Separately, the defenses lay out the legal reasons why the cancellation is bogus: the SFLC helped to apply for the Conservancy’s registration in the first place! Similarly, SFLC and Conservancy promoted themselves separately, without conflict, for years in the past – which weakens any claim of confusion now.

Overall, my money is with Conservancy getting this thrown out of the TTAB.  The main question is: how much effort will it take, and will the SFLC actually come talk to the Conservancy outside of the TTAB, or merely plug away in trademark court?

Time will tell; I just hope this doesn’t drag on for too long since there are more important things we can all be doing to help free and open source software grow.

Classes Of Goods (UPDATED)

When applying to register a trademark, you list the Nice classes that your products or services are provided in – automobiles, software, restaurant services, etc.  Similar marks in different classes aren’t necessarily a problem, but they aren’t necessarily OK to use by different producers either.  The claimed goods by each registrant here are very different:

  • SOFTWARE FREEDOM LAW CENTER®, registered in class 45 for “Legal services”.
  • SOFTWARE FREEDOM CONSERVANCY®, registered in class 35 for “Charitable services, namely, promoting public awareness of free, libre and open source software projects, and developing and defending the same.”, and also in class 9 for “Downloadable computer software for media file management, object-oriented software engineering, messaging, software development tools, operating system utilities, operating system emulation, inventory management, graphics modeling, Braille displays, implementation of dynamic languages, print services, browser automation, operating systems programs in the field of education, and computer operating system tools for use in embedded systems, provided freely and openly licensed use for the public good.”

Registration classes aren’t directly used in the likelihood of confusion tests, but this certainly feels like SFLC is changing their business (see their recent post about project services), and is now claiming that their rights expand beyond what their registration claims.

The post Legal Issues Around SOFTWARE FREEDOM Trademarks appeared first on Community Over Code.

Categories: FLOSS Project Planets

Thadeu Lima de Souza Cascardo: Software Freedom Strategy with Community Projects

Planet Debian - Thu, 2017-11-09 20:52

It's been some time since I last wrote. Life and work have been busy. At the same time, the world has been busy, and as much as I would love to write a longer post, I will try to be short here. I would love to touch on the Librem 5 and postmarketOS. In fact, I already have, in a podcast in Portuguese, Papo Livre. Maybe I'll touch a little on the latter.

Some of the inspiration for this post came from a number of recent talks, posts and discussions.

All of those led me to understand how software freedom is under attack, and in particular how copyleft is under attack. And, as I said during FISL, though many might say that "Open Source has won", end users' software freedom has not. Lots of companies have co-opted "free software" but give no software freedom to their users. They seem friends with free software, and they are, because they want software to be free. But freedom should not be a value for software itself; it needs to be a value for people: not only companies or people who are labeled software developers, but all people.

That's why I want to stop talking about free software, and talk more about software freedom, because I believe the latter makes clearer what we are talking about. I don't mind which label we use, as long as we establish its meaning during conversations, and set the tone to distinguish them. The thing is: free software does not software freedom make. Not by itself. As Bradley Kuhn puts it: it's not magic pixie dust.

Those who have known me for years might remember me as a person who studied free software licenses and how I valued copyleft, the GPL specifically, and how I concerned myself with topics like license compatibility and other licensing matters.

Others might remember me as a person who greatly valued upstreaming code: not carrying changes to openly developed software that you had not made an effort to put upstream.

I can't say I was wrong on either count. I still believe in those things. I still believe in the importance of copyleft and the GPL. I still value sharing your code in the commons by going upstream. But I was certainly wrong in valuing them too much, and in not giving as much or even more value to the distribution efforts that get software freedom to users.

And it took me a while to see how many people also saw the GPL as a tool to get code upstream. You see that a lot in Linus' discourse about the GPL, and it is on the minds of a lot of people, who I have seen argue that copyleft is not necessary for companies to contribute code back. But that's the problem. The point is not about getting code upstream; it is about assuring that people have the freedom to run a modified version of the software they received on their computers. It turns out that many companies who have contributed code upstream have not delivered that freedom to their end users, who received a modified version of that same software, which is not free.

Bradley Kuhn also alerts us that many companies have been replacing copyleft software with non-copyleft software. And I completely agree with him that we should be writing more copyleft software that we hold the copyright for, so we can enforce it. But looking at what has been happening recently in the Linux community around enforcement, even though I still believe in enforcement as a strategy, I think we need much more than that.

And one of those strategies is delivering more free software that users may be able to install on their own computers. It's building those replacements for software that people have been using for any reason. Be it the OS they get when they buy a device, or the application they use for communication. It's not like the community is not doing it, it's just that we need to acknowledge that this is a necessary strategy to guarantee software freedom. That distribution of software that users may easily install on their computers is as much or even more valuable than developing software closer to the hacker/developer community. That doing downstream changes to free software in the effort of getting them to users is worth it. That maintaining that software stable and secure for users is a very important task.

I may be biased when talking about this, as in recent years I have been shifting between upstream and downstream work. But maybe that's what I needed in order to realize that upstreaming does not necessarily guarantee that users will get software freedom.

I believe we need to talk more about that. I have seen many people dear to me disregard the difference between the freedom of the user and the freedom of software. There is much more to say about this, and more detail to go into on some of these points, and I think we need to debate them. I am subscribed to the libreplanet-discuss mailing list. Come join us in discussing software freedom there, if you want to comment on anything I brought up here.

As I promised, I would like to mention postmarketOS, which is an option users now have to get some software freedom on some mobile devices. It's an effort I wanted to build myself, and I applaud the community that has developed around it and has been moving forward so quickly. And it's a good example of a balance between upstream and downstream code that delivers a better level of software freedom to users than the vendor ever would.

I wanted to write about many of these topics earlier, but postponed it for some time. I was motivated by recent events in the community, and I am really disappointed at some of the free software players and some of the events that happened in the last few years. That got me thinking about how we need to speak up about those issues, so people know how we feel. So here it is: I am disappointed at how the Linux Foundation handled the situation with the Software Freedom Conservancy taking a case against VMware; I am disappointed at how the Software Freedom Law Center handled a trademark issue against the Software Freedom Conservancy; and I really appreciate all the work the Software Freedom Conservancy has been doing. I have supported them for the last two years, and I urge you to become a supporter too.

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: DrupalCon Vienna sessions about business

Planet Drupal - Thu, 2017-11-09 20:11
Nowadays, business operates in a complex and dynamic environment. Because of its uncertainty, it's never too late to listen to a good lecture. If you have missed any sessions from DrupalCon Vienna, let us highlight some of them for you.    Co-operative Drupal: Growth & Sustainability through Worker Ownership Finn Lewis, Technical Director of Agile Collective Ltd   There is an increasing number of worker-owned Drupal companies, and more and more sectors are looking for effective and customizable software solutions, so it's a good time to start or grow a Drupal business, which is not… READ MORE
Categories: FLOSS Project Planets

Curtis Miller: Start Getting and Working with Data with “Data Acquisition and Manipulation with Python”

Planet Python - Thu, 2017-11-09 20:00
Announcing my video course, Data Acquisition and Manipulation with Python.
Categories: FLOSS Project Planets

Justin Mason: Links for 2017-11-09

Planet Apache - Thu, 2017-11-09 18:58
Categories: FLOSS Project Planets

Stefan Bodewig: XMLUnit for Java 2.5.1 Released

Planet Apache - Thu, 2017-11-09 16:05

This release fixes a bug in CompareMatcher that could lead to a NoSuchElementException being thrown when combined with other Hamcrest matchers.

The full list of changes for XMLUnit for Java:

  • Made Travis build work with OpenJDK6 again. PR #101 by @PascalSchumacher.
  • CompareMatcher's describeTo method threw an exception if the comparison yielded no differences. Issue #107.
Categories: FLOSS Project Planets

Stack Abuse: Python's os and subprocess Popen Commands

Planet Python - Thu, 2017-11-09 15:46
Introduction

Python offers several options for running external processes and interacting with the operating system. However, the methods are different for Python 2 and 3. Python 2 has several methods in the os module which are now deprecated; they have been replaced by the subprocess module, which is the preferred option in Python 3.

Throughout this article we'll talk about the various os and subprocess methods, how to use them, how they're different from each other, on what version of Python they should be used, and even how to convert the older commands to the newer ones.

Hopefully by the end of this article you'll have a better understanding of how to call external commands from Python code and which method you should use to do it.

First up is the older os.popen* methods.

The os.popen* Methods

The os module offers four different methods that allow us to interact with the operating system (just like you would with the command line) and create a pipe to other commands. These methods are popen, popen2, popen3, and popen4, all of which are described in the following sections.

The goal of each of these methods is to be able to call other programs from your Python code. This could be calling another executable, like your own compiled C++ program, or a shell command like ls or mkdir.

os.popen

The os.popen method opens a pipe from a command. This pipe allows the command to send its output to another command. The output is an open file that can be accessed by other programs.

The syntax is as follows:

os.popen(command[, mode[, bufsize]])

Here the command parameter is what you'll be executing, and its output will be available via an open file. The argument mode defines whether or not this output file is readable ('r') or writable ('w'). Appending a 'b' to the mode will open the file in binary mode. Thus, for example "rb" will produce a readable binary file object.

In order to retrieve the exit code of the command executed, you must use the close() method of the file object.
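For example, here is a minimal sketch (assuming a Unix-like system; the filename is made up) of reading a command's output and then collecting its exit status via close():

import os

# Ask for a file that doesn't exist, then check how the command fared.
p = os.popen('ls no-such-file')
print(p.read())
status = p.close()  # None if the command succeeded, a non-zero encoded exit status otherwise
print(status)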

The bufsize parameter tells popen how much data to buffer, and can assume one of the following values:

  • 0 = unbuffered (default value)
  • 1 = line buffered
  • N = approximate buffer size, when N > 0; and default value, when N < 0

This method is available for Unix and Windows platforms, and has been deprecated since Python 2.6. If you're currently using this method and want to switch, here is the equivalent subprocess version:

Method:      pipe = os.popen('cmd', 'r', bufsize)
Replaced by: pipe = Popen('cmd', shell=True, bufsize=bufsize, stdout=PIPE).stdout

Method:      pipe = os.popen('cmd', 'w', bufsize)
Replaced by: pipe = Popen('cmd', shell=True, bufsize=bufsize, stdin=PIPE).stdin

The code below shows an example of how to use the os.popen method:

import os

p = os.popen('ls -la')
print(p.read())

The code above will ask the operating system to list all files in the current directory. The output of our method, which is stored in p, is an open file, which is read and printed in the last line of the code. The result of this code (in the context of my current directory) is as follows:

$ python popen_test.py
total 32
drwxr-xr-x   7 scott  staff  238 Nov  9 09:13 .
drwxr-xr-x  29 scott  staff  986 Nov  9 09:08 ..
-rw-r--r--   1 scott  staff   52 Nov  9 09:13 popen2_test.py
-rw-r--r--   1 scott  staff   55 Nov  9 09:14 popen3_test.py
-rw-r--r--   1 scott  staff   53 Nov  9 09:14 popen4_test.py
-rw-r--r--   1 scott  staff   49 Nov  9 09:13 popen_test.py
-rw-r--r--   1 scott  staff    0 Nov  9 09:13 subprocess_popen_test.py

os.popen2

This method is very similar to the previous one. The main difference is what the method outputs. In this case it returns two file objects, one for the stdin and another file for the stdout.

The syntax is as follows:

popen2(cmd[, mode[, bufsize]])

These arguments have the same meaning as in the previous method, os.popen.

The popen2 method is available for both the Unix and Windows platforms. However, it is found only in Python 2. If you want to use the subprocess version instead (shown in more detail below), replace it as follows:

Method:      (child_stdin, child_stdout) = os.popen2('cmd', mode, bufsize)
Replaced by: p = Popen('cmd', shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, close_fds=True)
             (child_stdin, child_stdout) = (p.stdin, p.stdout)

The code below shows an example on how to use this method:

import os

# 'in' is a reserved keyword in Python, so we use in_ for the child's stdin.
in_, out = os.popen2('ls -la')
print(out.read())

This code will produce the same results as shown in the first code output above. The difference here is that the output of the popen2 method consists of two files. Thus, the 2nd line of code defines two variables: in_ and out (note that in on its own is a reserved keyword in Python, so it can't be used as a variable name). In the last line, we read the output file out and print it to the console.
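Since popen2 also hands us the child's stdin, we can send the command some data as well. A minimal sketch (Python 2 only, Unix assumed, using the sort command):

import os

in_, out = os.popen2('sort')
in_.write('banana\napple\n')
in_.close()  # close stdin so sort sees end-of-input and can produce output
print(out.read())  # prints 'apple' before 'banana'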

os.popen3

This method is very similar to the previous ones. However, the difference is that the output of the command is a set of three files: stdin, stdout, and stderr.

The syntax is:

os.popen3(cmd[, mode[, bufsize]])

where the arguments cmd, mode, and bufsize have the same specifications as in the previous methods. The method is available for Unix and Windows platforms.

Note that this method has been deprecated and the Python documentation advises us to replace the popen3 method as follows:

Method:      (child_stdin, child_stdout, child_stderr) = os.popen3('cmd', mode, bufsize)
Replaced by: p = Popen('cmd', shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
             (child_stdin, child_stdout, child_stderr) = (p.stdin, p.stdout, p.stderr)

As in the previous examples, the code below will produce the same result as seen in our first example.

import os

in_, out, err = os.popen3('ls -la')
print(out.read())

However, in this case, we have to define three files: stdin, stdout, and stderr. The list of files from our ls -la command is saved in the out file.

os.popen4

As you probably guessed, the os.popen4 method is similar to the previous methods. However, in this case, it returns only two files, one for the stdin, and another one for the stdout and the stderr.

This method is available for the Unix and Windows platforms and (surprise!) has also been deprecated since version 2.6. To replace it with the corresponding subprocess Popen call, do the following:

Method:      (child_stdin, child_stdout_and_stderr) = os.popen4('cmd', mode, bufsize)
Replaced by: p = Popen('cmd', shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
             (child_stdin, child_stdout_and_stderr) = (p.stdin, p.stdout)

The following code will produce the same result as in the previous examples, which is shown in the first code output above.

import os

in_, out = os.popen4('ls -la')
print(out.read())

As we can see from the code above, the method looks very similar to popen2. However, the out file in the program will show the combined results of both the stdout and the stderr streams.
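A minimal sketch (Python 2 only, Unix assumed, with a made-up filename) that shows the merged streams in action:

import os

# Because popen4 merges stderr into the same stream as stdout,
# the error message from ls shows up in the out file.
in_, out = os.popen4('ls no-such-file')
print(out.read())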

Summary of differences

The differences between the popen* commands all have to do with their output, which is summarized in the table below:

Method   Returns
popen    stdout
popen2   stdin, stdout
popen3   stdin, stdout, stderr
popen4   stdin, stdout and stderr (combined)

In addition, popen2, popen3, and popen4 are only available in Python 2, not in Python 3. Python 3 still provides the popen method, but it is recommended to use the subprocess module instead, which we'll describe in more detail in the following section.

The subprocess.Popen Method

The subprocess module was created with the intention of replacing several methods available in the os module, which were not considered to be very efficient. Within this module, we find the new Popen class.

The Python documentation recommends the use of Popen in advanced cases, when other methods such as subprocess.call cannot fulfill our needs. This method allows for the execution of a program as a child process. Because this is executed by the operating system as a separate process, the results are platform-dependent.

The available parameters are as follows:

subprocess.Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0)

One main difference of Popen is that it is a class and not just a method. Thus, when we call subprocess.Popen, we're actually calling the constructor of the class Popen.

There are quite a few arguments in the constructor. The most important to understand is args, which contains the command for the process we want to run. It can be specified as a sequence of parameters (via an array) or as a single command string.

The second argument that is important to understand is shell, which defaults to False. On Unix, when we want to pass the command as a single string that the shell should interpret, like ls -la, we need to set shell=True.
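By contrast, a minimal sketch of the sequence form, which bypasses the shell entirely, so shell can stay at its default:

import subprocess

# The same command as a list of arguments; no shell parsing is involved.
subprocess.Popen(['ls', '-la'])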

For example, the following code will call the Unix command ls -la via a shell.

import subprocess

subprocess.Popen('ls -la', shell=True)

The results can be seen in the output below:

$ python subprocess_popen_test.py
total 40
drwxr-xr-x   7 scott  staff  238 Nov  9 09:13 .
drwxr-xr-x  29 scott  staff  986 Nov  9 09:08 ..
-rw-r--r--   1 scott  staff   52 Nov  9 09:13 popen2_test.py
-rw-r--r--   1 scott  staff   55 Nov  9 09:14 popen3_test.py
-rw-r--r--   1 scott  staff   53 Nov  9 09:14 popen4_test.py
-rw-r--r--   1 scott  staff   49 Nov  9 09:13 popen_test.py
-rw-r--r--   1 scott  staff   56 Nov  9 09:16 subprocess_popen_test.py

The following example, from a Windows machine, shows the difference the shell parameter makes more clearly. Here we're opening Microsoft Excel either from the shell, or as an executable program. From the shell, it is just as if we were opening Excel from a command window.

The following code will open Excel from the shell (note that we have to specify shell=True):

import subprocess subprocess.Popen("start excel", shell=True)

However, we can get the same results by calling the Excel executable. In this case we are not using the shell, so we leave it with its default value (False); but we have to specify the full path to the executable.

import subprocess subprocess.Popen("C:\Program Files (x86)\Microsoft Office\Office15\excel.exe")

In addition, when we instantiate the Popen class, we have access to several useful methods:

Method                 Description
Popen.poll()           Checks if the child process has terminated.
Popen.wait()           Waits for the child process to terminate.
Popen.communicate()    Allows us to interact with the process.
Popen.send_signal()    Sends a signal to the child process.
Popen.terminate()      Stops the child process.
Popen.kill()           Kills the child process.

The full list can be found at the subprocess documentation. The most commonly used method here is communicate.
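For instance, a minimal sketch (Unix assumed, using the sleep command) of poll() and wait():

import subprocess

p = subprocess.Popen('sleep 2', shell=True)
print(p.poll())  # None: the child is still running
p.wait()         # block until the child terminates
print(p.poll())  # 0: the child's return code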

The communicate method allows us to send data to the process's standard input, and to read the data it writes to standard output and standard error. It returns a tuple defined as (stdoutdata, stderrdata).
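As a minimal sketch (Unix assumed, using the cat command), this is how communicate can feed a child's stdin and collect its stdout:

import subprocess

p = subprocess.Popen('cat', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = p.communicate(input=b'hello\n')  # pass a str instead of bytes on Python 2
print(out)  # b'hello\n'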

For example, the following code will combine the Windows dir and sort commands.

import subprocess

p1 = subprocess.Popen('dir', shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = subprocess.Popen('sort /R', shell=True, stdin=p1.stdout)
p1.stdout.close()
out, err = p2.communicate()

In order to combine both commands, we create two subprocesses, one for the dir command and another for the sort command. Since we want to sort in reverse order, we add the /R option to the sort call.

We define the stdout of process 1 as PIPE, which allows us to use the output of process 1 as the input for process 2. Then we need to close the stdout of process 1, so it can be used as input by process 2. The communication between the processes is achieved via the communicate method.

Running this from a Windows command shell produces the following:

> python subprocess_pipe_test.py
11/09/2017  08:52 PM       234 subprocess_pipe_test.py
11/09/2017  07:13 PM        99 subprocess_pipe_test2.py
11/09/2017  07:08 PM        66 subprocess_pipe_test3.py
11/09/2017  07:01 PM        56 subprocess_pipe_test4.py
11/09/2017  06:48 PM    <DIR>  ..
11/09/2017  06:48 PM    <DIR>  .
 Volume Serial Number is 2E4E-56A3
 Volume in drive D is ECA
 Directory of D:\MyPopen
               4 File(s)            455 bytes
               2 Dir(s)  18,634,326,016 bytes free

Wrapping up

The os methods were a good option in the past; however, the subprocess module now has several methods which are more powerful and efficient to use. Among the tools available is the Popen class, which can be used in more complex cases. This class also provides the communicate method, which helps us pipe together different commands for more complex functionality.

What do you use the popen* methods for, and which do you prefer? Let us know in the comments!

Categories: FLOSS Project Planets

Stefan Scherfke: Getting started with devpi

Planet Python - Thu, 2017-11-09 15:33

Recently, I wanted to (re)evaluate devpi for use in our company. I had already worked with it some years ago but had—by now—forgotten what exactly it can do and how to set it up.

However, the marketing on the landing page of the docs and on GitHub was not very convincing and I nearly ended up working with another product.

A short Twitter discussion later, I decided to give devpi a try and write down my findings. Maybe they can contribute to improving devpi’s docs and demonstrate how easy it is to use.

This is what I was trying to achieve with devpi:

  • Mirror and cache PyPI
  • Extend PyPI with an index for internal stable packages
  • Extend the stable index with a staging index for experimental new features
  • No free registration for users
  • Only a single, authorized user (our GitLab CI)
  • Use standard tools (pip, twine, …) as much as possible
Installation

Devpi consists of a server, a command line client and a web front-end. The meta package devpi installs them all:

$ mkvirtualenv devpi
$ pip install devpi
...
Installing collected packages: ...
Successfully installed ...

Setting up the server

Devpi uses SQLite to store its data, so we don't need to set up a database (you can use PostgreSQL though, if you want).

When you start the devpi-server for the first time, you need to pass the --init option. You can also specify where it should store its data (the default is ~/.devpi):

$ devpi-server --serverdir=/tmp/devpi --init ...

If you don’t use the standard location, you must set the --serverdir option every time you start the server.

User management

We can now set a password for the root user and allow only root to create new users:

$ devpi-server --serverdir=/tmp/devpi --passwd root
enter password for root:
repeat password for root:
$ devpi-server --serverdir=/tmp/devpi --restrict-modify=root --start
...
starting background devpi-server at http://localhost:3141
...

Like the --serverdir option, you must always pass --restrict-modify=root when you start the server.

Once the server is running, we can use the devpi client devpi to create an additional user named packages. Prior to that, we tell the devpi client which server to operate on, and log in:

$ devpi use http://localhost:3141
...
$ devpi login root
password for user root:
logged in 'root', credentials valid for 10.00 hours
$ devpi user -c packages email=packaging@company.com password=packages
user created: packages
$ devpi user -l
packages
root

Package indexes

By default, devpi creates an index called root/pypi. It serves as a proxy and cache for PyPI and you can’t upload your own packages to it.

However, devpi supports index inheritance: We can create our stable index in the packages namespace and set root/pypi as its base. If we query packages/stable, devpi first searches this index and then falls back to root/pypi if it can't find the package on the first index.

$ devpi index -c packages/stable bases=root/pypi volatile=False
http://localhost:3141/packages/stable:
  type=stage
  bases=root/pypi
  volatile=False
  acl_upload=packages
  mirror_whitelist=
  pypi_whitelist=

Similarly, our staging index can inherit from stable. Devpi will then search packages/staging, packages/stable and finally root/pypi for packages:

$ devpi index -c packages/staging bases=packages/stable volatile=True
http://localhost:3141/packages/staging:
  type=stage
  bases=packages/stable
  volatile=True
  acl_upload=packages
  mirror_whitelist=
  pypi_whitelist=

The volatile=True option lets us perform destructive actions on the index (like overriding or deleting packages).

Install packages

Let’s use our devpi to load a public package from PyPI:

$ pip install -i http://localhost:3141/packages/stable click
Collecting click
  Downloading http://localhost:3141/root/pypi/+f/5e7/a4e296b3212da/click-6.7-py2.py3-none-any.whl (71kB)
Installing collected packages: click
Successfully installed click-6.7

We can also configure pip to use our devpi as the default index:

[global]
index-url = http://localhost:3141/packages/stable

Upload packages

You can build and upload packages with your usual workflow. Just add devpi to your ~/.pypirc (Do not store passwords in there!):

[distutils]
index-servers =
    devpi-stable
    devpi-staging

[devpi-stable]
repository = http://localhost:3141/packages/stable/
username = packages

[devpi-staging]
repository = http://localhost:3141/packages/staging/
username = packages

Now we can build some dists:

$ cd ~/Projects/simpy
$ python setup.py sdist bdist_wheel
...

And upload them:

$ pip install twine
...
Installing collected packages: ...
Successfully installed ...
$ twine upload -r devpi-stable dist/*
Uploading distributions to http://localhost:3141/packages/stable/
Enter your password:
Uploading simpy-3.0.10-py2.py3-none-any.whl
Uploading simpy-3.0.10.tar.gz

Congratulations!

You can now install your own packages from your own packages index:

$ pip install simpy
Collecting simpy
  Downloading simpy-3.0.10-py2.py3-none-any.whl
Installing collected packages: simpy
Successfully installed simpy-3.0.10

Devpi’s docs may appear a bit confusing and look a little demure, but devpi itself is actually really easy to set up and use — and powerful at the same time!

Categories: FLOSS Project Planets

Acquia Developer Center Blog: Optional Config Weirdness in Drupal 8

Planet Drupal - Thu, 2017-11-09 15:24

Ah, the config system. Crown jewel of Drupal 8, amirite?

Well, yeah, it’s fantastic and flexible (as is most of Drupal). But if you have advanced use cases — such as building a system that alters config dynamically — there are traps you should know about.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Stanford Web Services Blog: BADCamp 2017: Caryl’s Training Recap

Planet Drupal - Thu, 2017-11-09 13:14

BADCamp is a delightful mix of networking and educational opportunities. Along with connecting with former acquaintances and meeting new people, I attended two really informative trainings. Here’s my recap:

Categories: FLOSS Project Planets

Plasma 5.11.3 bugfix release now in backports PPA for Artful Aardvark 17.10

Planet KDE - Thu, 2017-11-09 13:03

The 3rd bugfix update (5.11.3) of the Plasma 5.11 series is now available for users of Kubuntu Artful Aardvark 17.10 to install via our Backports PPA. This update also includes an update for Krita to 3.3.2.1.

To update, add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

Upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications, so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial number of upgraded packages in addition to Plasma 5.11.3.

~ The PPA will also continue to receive bugfix updates to Plasma 5.11 when they become available, and further updated applications where practical.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

Categories: FLOSS Project Planets

Acquia Lightning Blog: Optional config weirdness in Drupal 8

Planet Drupal - Thu, 2017-11-09 12:55
By phenaproxima, Thu, 11/09/2017 - 12:55

This post was originally published on Medium.

Ah, the config system. Crown jewel of Drupal 8, amirite?

Well, yeah, it’s fantastic and flexible (as is most of Drupal). But if you have advanced use cases — such as building a system that alters config dynamically — there are traps you should know about.

Config is normally a fairly static thing. Your module/theme/profile (“extension” from here on out) has some configuration in its config/install sub-directory, and when the extension is installed, that config is imported. Once it’s imported, that config is owned by your site and you can change it in any way you see fit.

That’s the simplest, and most common, use case in a nutshell. Let’s talk about some other ones.

Optional config

In some extensions’ config directory, you will see an ‘optional’ directory alongside ‘install’. If you look in there, you see…some YAML files. What’s all this, then?

Optional config is normal config, but it’s treated differently. When an extension is installed, each piece of optional config it provides is analyzed, then imported only if all of its dependencies are fulfilled. A piece of config can declare, for example, that it depends on a view called ‘content’. That’d be expressed thusly in code:

dependencies:
  config:
    - views.view.content

If that piece of config is optional, then it will only be imported if, well, a view called ‘content’ exists in the system. That view might have been shipped with another extension, or maybe you created it manually. It makes no difference. As long as a view called ‘content’ is present, any optional config that depends on it will be imported as well.

Neat, yes? This comes in quite handy for something like Lightning, which allows you to create an install profile which “extends” Lightning, yet only installs some of Lightning’s components. Optional config allows us to ship config that might be imported, depending on what parts of Lightning you have chosen to install. Hooray for flexibility!

Optional config whilst installing Drupal

But wait, there’s more.

When you’re doing a full site installation (i.e., install.php or drush site-install), optional config is treated a little bit differently. In such a case, all extensions are installed as normal, but their optional config is ignored initially. Then, at the end of the installation process [1], once all extensions are installed (and their default config has been imported), all [2] the optional config is imported in a single batch.

I don’t think this is documented anywhere, but it can have major ramifications. Consider this piece of code — let’s say it’s part of a module called fubar, which provides some default config and some optional config. This hook will be invoked while fubar is being installed, but after its default config has been imported:

<?php

/**
 * Implements hook_install().
 */
function fubar_install() {
  $view = entity_load('view', 'fubar_view');
  $view->setDescription('The force will be with you, always.');
  $view->save();
}

fubar_view is optional config, so will entity_load() return a view entity, or will it return NULL? What do you think?

The maddening answer is it depends. It depends on when fubar is being installed. If Drupal is already installed, and you’re just adding fubar to your site, then $view will be a view entity, because the optional config will be imported before hook_install() is invoked. But if you’re installing fubar as part of a full site install — as part of an installation profile, say — $view is going to be NULL, because optional config is imported in a single batch at the end of the installation process, long after all hook_install() implementations have been invoked.

Yeah, it’s a WTF, but it’s a justifiable one: trying to resolve the dependencies of optional config during Drupal’s install process would probably have been a colossal, NP-complete nightmare.

Dynamically altering optional config

So let’s say you need to make dynamic alterations to optional config. You can’t do it in hook_install(), because you can’t be sure that the config will even exist in there. How can you do it?

The easiest thing is not to make assumptions about when the config will be available, but simply react when it is. If the optional config you’re trying to alter is an entity of some sort, then you can simply use entity hooks! Continuing our fubar example, you could add this to fubar.module:

<?php

use Drupal\views\ViewEntityInterface;

/**
 * Implements hook_ENTITY_TYPE_presave().
 */
function fubar_view_presave(ViewEntityInterface $view) {
  if ($view->isNew() && $view->id() == 'fubar_view') {
    $view->setDescription('Do, or do not. There is no try.');
  }
}

This ensures that fubar_view will contain timeless Yoda wisdom, regardless of whether fubar_view was imported while installing Drupal. If fubar_view is created at the end of the installation process, no problem — the hook will catch it and set the description. On the other hand, if fubar_view is installed during drush pm-enable fubar, the hook will…catch it and set the description. It works either way. It’s fine to dynamically alter optional config, but don’t assume it will be available in hook_install(). Simply react to its life cycle as you would react to any other entity’s. Enjoy!

Moar things for your brain
  • hook_install() can never assume the presence of optional config…but it can assume the presence of default config (i.e., the stuff in the config/install directories). That is always imported before hook_install() is invoked.
  • In fact, you can never depend on the presence of optional config. That’s why it’s optional: it might exist, and it might not. That’s its nature! Remember this, and code defensively.
  • The config_rewrite module, though useful, can throw a monkey wrench into this. Due to the way it works, it might create config entities, even optional ones, before hook_install() is invoked. Even during the Drupal installation process. Beware! (They are, however, working to fix this.)
  • The config system is well-documented. Start here and here. This post is only one of tons of other blog posts about config in D8.
  • This blog post came about because of this Lightning issue. So hats off to Messrs. mortenson and balsama.
  • Thanks to dawehner for reviewing this post for technical accuracy.
  • “NP-complete”, as I understand it, is CompSci-ese for “unbelievably hard to solve”. Linking to the Wikipedia article makes me seem smarter than I really am.

[1] The reason this happens at the end is because any number of things could be changing during installation (who knows what evil lurks in hook_install()? Not even the Shadow knows), and trying to solve multiple dependency chains while everything is changing around you is like trying to build multiple castles on a swamp. (Only one person has ever accomplished it.) Don't think about this stuff too much, or it will melt your brain.

[2] “All”, in this case, means “all the optional config with fulfilled dependencies,” not all-all.

Categories: FLOSS Project Planets

Kdenlive 17.08.3 released

Planet KDE - Thu, 2017-11-09 12:22

The last dot release of the 17.08 series is out with minor fixes. We continue to focus on the refactoring branch, with steady progress towards a stable release.

Fixes

  • Set a proper desktop file name to fix an icon under Wayland. Commit.
  • Sort clip zones by position instead of name. Commit.
  • Fix melt.exe finding on windows. Commit.
  • Revert “Windows: terminate KDE session on window close”. Commit.
  • Make KCrash optional. Commit.
Categories: FLOSS Project Planets

Python Engineering at Microsoft: Don Jayamanne, creator of the Python extension for Visual Studio Code, joins Microsoft

Planet Python - Thu, 2017-11-09 11:52

I'm delighted to announce that Don Jayamanne, the author of the most popular Python extension for Visual Studio Code, has joined Microsoft! Starting immediately, Microsoft will be publishing and supporting the extension. You will receive the update automatically, or visit our Visual Studio Marketplace page and click "Install".

Python has a long history at Microsoft, starting with IronPython, Python Tools for Visual Studio, the Python SDK for Azure, and Azure Notebooks, as well as contributing developer time and support to CPython.

While some people like using a full-featured IDE such as Visual Studio, others prefer to have a lighter weight, editor-centric experience such as Visual Studio Code. Don has been working on the Python extension part-time for the past couple of years. We were impressed by his work and have hired him to work on it full-time, along with other members of our Python team.

What does Microsoft Python team publishing the extension mean?

For all practical purposes the transition should be transparent to you. Additionally:

  • The extension will remain open source and free
  • Development will continue to be on GitHub, under the existing license
  • More dev resources means (generally) faster turnaround on bug fixes and new features
  • Official support will be provided by Microsoft

For this first release we're focusing on fixing a number of existing issues and adding a few new features such as multi-root and status bar interpreter selection (a complete list of changes can be found in the changelog).

Note: If for whatever reason you would prefer to continue to use the extension as Don released it prior to joining Microsoft, you can uninstall the Python extension and then download the VSIX file and install it manually. Note that no further development will be done in the old GitHub repo.

We're hiring for Visual Studio Code / Python!

We're hiring devs immediately to continue and expand work on our Python support for Visual Studio Code. If you are passionate about developer tools and productivity, this could be an ideal endeavor! The ideal candidate has a mix of IDE (editor/debugger), JavaScript/TypeScript, and Python in their background, and experience writing plugins for VS Code is a big plus. If you bring something really special to the table we can consider remote, but ideally you would plan to relocate to our Redmond offices. If interested, please send your resume to pythonjobs@microsoft.com with the subject: "VSC-Python".

Categories: FLOSS Project Planets

Amazee Labs: GraphQL for Drupalers - the basics

Planet Drupal - Thu, 2017-11-09 11:25

GraphQL is becoming more and more popular every day. Now that we have a beta release of the GraphQL module (mainly sponsored and developed by Amazee Labs) it's easy to turn Drupal into a first-class GraphQL server. This is the new GraphQL series in which we'll describe the features that are new in beta and provide a detailed overview of the integration with Drupal's entity and field systems.

By Blazej Owczarczyk, Thu, 11/09/2017 - 17:25


The modules

Let's start with the modules we need. Recently there were quite a few changes in this field. In alpha we had:

  • graphql - The main module laying the foundations for exposing data using Drupal plugins.
  • graphql_core - which exposed Drupal core data - the info about entity types and bundles, but not fields
  • graphql_content - which handled field exposition with the view modes
  • other auxiliary modules (e.g., graphql_boolean, graphql_entity_reference) that provided behaviours for specific field types

In beta, the structure has changed. Now the default schema exposes all the Drupal fields and (content) entities in raw form, using typed data. Thanks to that, GraphQL became a zero-configuration, plug-and-play module. We just need to enable graphql and graphql_core (the only two modules that are shipped with the package now) and we're good to go.

NOTE: The other modules are still available in case you need them, they're just not part of the core package anymore. graphql_legacy is where most of the field modules went. Besides that, there is graphql_views, which lets us expose views; the promising graphql_twig, which allows using GraphQL queries without a decoupled setup; and a few more. All of the modules are listed on the drupal-graphql organization page on GitHub.

The Explorer

After enabling graphql and graphql_core we can go ahead and test it with GraphiQL, an interactive tool that lets you run queries, see results in real time and preview all the available fields along with arguments and return types. It also provides query validation, autocomplete suggestions and keyboard shortcuts; thus, it's a kind of IDE. The explorer is connected with the schema. We can find it next to the Default Schema under Configuration -> Web services -> GraphQL -> Schemas, or using the direct path - graphql/explorer.

This is how it looks. On the left, there is a query box with a comment describing the tool and listing the keyboard shortcuts. Results show up in the box on the right. To run the query we can use the «play» button at the top, or the keyboard shortcut (Ctrl-Enter). One more important piece is the < Docs button in the upper right corner. This is where we can see all the available elements. Let's go ahead and click it.

The only thing we can start with is the query field, which is of type RootQuery. Clicking on the type shows a list of available sub-fields, including userById.

This field takes two arguments: an id (which is a string) and a language (which can be set to any language enabled on the site), and it is of type User. Clicking on the type brings up a list of fields on a user; the name is a string.

Strings are scalars (which means they don't have subfields), so we can finish our simple query here. It will look like this:
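(The original post showed the query as a screenshot; reconstructed from the fields discussed above, with "1" as a purely illustrative user ID, it would read:)

{
  userById(id: "1") {
    name
  }
}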

and (after running it with Ctrl-Enter) the response is what we'd expect:
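(Again reconstructed rather than reproduced from the screenshot; the actual name depends on the site, and "admin" here is just a placeholder:)

{
  "data": {
    "userById": {
      "name": "admin"
    }
  }
}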

The GraphQL explorer (or GraphiQL) gives us the ability to easily write and debug every GraphQL query. That's a feature that's hard to overestimate, so getting familiar with it is a good starting point for understanding how GraphQL works in general and how we can get our Drupal data out of it.

The Voyager

Voyager is another schema introspection tool, a visual one this time. It draws a chart of all the types and fields in the system and presents it in a nice way. It can be found next to the Explorer under Configuration -> Web services -> GraphQL -> Schemas, or using the direct path - graphql/voyager.

That's it for this post. In the next one, we'll see some examples of retrieving data from Drupal fields.


Categories: FLOSS Project Planets

Texas Creative: Drupal 8 Custom Table of Contents Block for Book Content

Planet Drupal - Thu, 2017-11-09 11:15

Want to customize the default block that comes with the book module?  Here’s a way to do it without writing custom code by using Views and the Views Tree module.

Read More
Categories: FLOSS Project Planets