FLOSS Project Planets

Abu Ashraf Masnun: Interfaces in Python: Protocols and ABCs

Planet Python - Sat, 2017-04-15 05:55

The idea of an interface is really simple - it is a description of how an object behaves. An interface tells us what an object can do to play its role in a system. In object-oriented programming, an interface is the set of publicly accessible methods on an object that other parts of the program can use to interact with it. Interfaces set clear boundaries and help us organize our code better. In some languages, like Java, interfaces are part of the language syntax and strictly enforced. In Python, however, things are a little different. In this post, we will explore how interfaces can be implemented in Python.

Informal Interfaces: Protocols / Duck Typing

There’s no interface keyword in Python, so the Java / C# way of using interfaces is not available here. In the dynamic language world, things are more implicit: we focus on how an object behaves rather than on its type or class.

If it talks and walks like a duck, then it is a duck

So if we have an object that can fly and quack like a duck, we treat it as a duck. This is called “duck typing”. At runtime, instead of checking the type of an object, we try to invoke a method we expect the object to have. If it behaves the way we expect, we’re fine and move along; but if it doesn’t, things might blow up. To be safe, we often handle the exception in a try..except block or use hasattr to check whether an object has the specific method.
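A minimal sketch of both defensive styles (the Duck and Robot classes here are made up for illustration):

```python
class Duck:
    def quack(self):
        return "Quack!"

class Robot:
    pass

def make_it_quack(obj):
    # Duck typing: don't check the type, just try the behavior we
    # expect and handle the failure case.
    try:
        return obj.quack()
    except AttributeError:
        return None

def can_quack(obj):
    # Alternatively, probe for the method up front with hasattr.
    return hasattr(obj, "quack")

print(make_it_quack(Duck()))   # Quack!
print(make_it_quack(Robot()))  # None
print(can_quack(Duck()), can_quack(Robot()))  # True False
```

Either style works; try..except fits the "easier to ask forgiveness" idiom, while hasattr is useful when you want to decide up front.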

In the Python world, we often hear “file-like object” or “an iterable” - if an object has a read method, it can be treated as a file-like object; if it has an __iter__ magic method, it is an iterable. So any object, regardless of its class/type, can conform to a certain interface just by implementing the expected behavior (methods). These informal interfaces are termed protocols. Since they are informal, they cannot be formally enforced; they are mostly illustrated in the documentation or defined by convention. All the cool magic methods you have heard about - __len__, __contains__, __iter__ - help an object conform to some protocol.

class Team:
    def __init__(self, members):
        self.__members = members

    def __len__(self):
        return len(self.__members)

    def __contains__(self, member):
        return member in self.__members

justice_league_fav = Team(["batman", "wonder woman", "flash"])

# Sized protocol
print(len(justice_league_fav))

# Container protocol
print("batman" in justice_league_fav)
print("superman" in justice_league_fav)
print("cyborg" not in justice_league_fav)

In the above example, by implementing the __len__ and __contains__ methods, we can now use the len function directly on a Team instance and check for membership using the in operator. If we also added the __iter__ method to implement the iterable protocol, we would be able to do something like:

for member in justice_league_fav:
    print(member)

Without implementing the __iter__ method, if we try to iterate over the team, we will get an error like:

TypeError: 'Team' object is not iterable

So we can see that protocols are like informal interfaces. We can implement a protocol by implementing the methods expected by it.
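As a sketch of the iterable protocol just mentioned, here is the same Team class with __iter__ added:

```python
class Team:
    def __init__(self, members):
        self.__members = members

    def __len__(self):
        return len(self.__members)

    def __contains__(self, member):
        return member in self.__members

    def __iter__(self):
        # Delegating to the underlying list's iterator is enough
        # to satisfy the iterable protocol.
        return iter(self.__members)

justice_league_fav = Team(["batman", "wonder woman", "flash"])

for member in justice_league_fav:
    print(member)
```

Anything that consumes an iterable (for loops, list(), sorted()) now works with Team instances.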

Formal Interfaces: ABCs

While protocols work fine in many cases, there are situations where informal interfaces or duck typing in general can cause confusion. For example, a Bird and an Aeroplane can both fly(), but they are not the same thing even if they implement the same interfaces / protocols. Abstract Base Classes, or ABCs, can help solve this issue.

The concept behind ABCs is simple - we define base classes which are abstract in nature, and mark certain methods on them as abstract methods. Any class deriving from these base classes is then forced to implement those methods. And since we’re using base classes, if we see that an object has our class as a base class, we know that it implements the interface. That means we can now use types to tell whether an object implements a certain interface. Let’s see an example.

import abc

class Bird(abc.ABC):
    @abc.abstractmethod
    def fly(self):
        pass

The abc module has a metaclass named ABCMeta, and ABCs are created from this metaclass. So we can either use it directly as the metaclass of our ABC (something like this - class Bird(metaclass=abc.ABCMeta):) or we can subclass the abc.ABC class, which already has abc.ABCMeta as its metaclass.
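For completeness, a sketch of the metaclass form; it behaves exactly the same as subclassing abc.ABC:

```python
import abc

class Bird(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def fly(self):
        pass

# The abstract method is enforced either way: instantiating a class
# that hasn't implemented fly() raises TypeError.
try:
    Bird()
except TypeError as error:
    print(error)
```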

Then we have to use the abc.abstractmethod decorator to mark our methods abstract. Now if any class derives from our base Bird class, it must implement the fly method too. The following code would fail:

class Parrot(Bird):
    pass

p = Parrot()

We see the following error:

TypeError: Can't instantiate abstract class Parrot with abstract methods fly

Let’s fix that:

class Parrot(Bird):
    def fly(self):
        print("Flying")

p = Parrot()

Also note:

>>> isinstance(p, Bird)
True

Since our parrot is recognized as an instance of the Bird ABC, we can be sure from its type that it definitely implements our desired interface.

Now let’s define another ABC named Aeroplane like this:

class Aeroplane(abc.ABC):
    @abc.abstractmethod
    def fly(self):
        pass

class Boeing(Aeroplane):
    def fly(self):
        print("Flying!")

b = Boeing()

Now if we compare:

>>> isinstance(p, Aeroplane)
False
>>> isinstance(b, Bird)
False

Even though both objects have the same fly method, we can now easily differentiate which one implements the Bird interface and which implements the Aeroplane interface.

We saw how we can create our own custom ABCs, but it is often discouraged to create custom ABCs when we can use or subclass the built-in ones. The Python standard library has many useful ABCs that we can easily reuse; a list lives in the collections.abc module - https://docs.python.org/3/library/collections.abc.html#module-collections.abc. Before writing your own, please check whether there’s an ABC for the same purpose in the standard library.
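For example, the earlier Team class could declare its interface by subclassing the built-in Collection ABC instead of a custom one (a sketch; Collection bundles the Sized, Iterable, and Container interfaces):

```python
from collections.abc import Collection

class Team(Collection):
    def __init__(self, members):
        self.__members = list(members)

    # Collection's abstract methods are __len__, __iter__ and
    # __contains__; all three must be implemented to instantiate.
    def __len__(self):
        return len(self.__members)

    def __iter__(self):
        return iter(self.__members)

    def __contains__(self, member):
        return member in self.__members

team = Team(["batman", "flash"])
print(isinstance(team, Collection))  # True - the type now documents the interface
```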

ABCs and Virtual Subclasses

We can also register a class as a virtual subclass of an ABC. In that case, even if the class doesn’t subclass our ABC, it will still be treated as a subclass of the ABC (and thus accepted as implementing the interface). An example demonstrates this better:

@Bird.register
class Robin:
    pass

r = Robin()

And then:

>>> issubclass(Robin, Bird)
True
>>> isinstance(r, Bird)
True

In this case, even though Robin neither subclasses our ABC nor defines the abstract method, we can register it as a Bird. The behavior of issubclass and isinstance can be overloaded by adding two relevant magic methods. Read more on that here - https://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass
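As a sketch of that overloading mechanism (the Flyer and Drone names here are invented for illustration), an ABC can define the __subclasshook__ class method so that any class with a fly method is accepted automatically, with no registration at all:

```python
import abc

class Flyer(abc.ABC):
    @abc.abstractmethod
    def fly(self):
        pass

    @classmethod
    def __subclasshook__(cls, C):
        # Accept any class that defines fly() somewhere in its MRO.
        if cls is Flyer:
            if any("fly" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

class Drone:
    def fly(self):
        print("Buzzing")

print(issubclass(Drone, Flyer))    # True, via the hook
print(isinstance(Drone(), Flyer))  # True as well
```

This is how ABCs such as collections.abc.Sized recognize any class that implements __len__.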

Categories: FLOSS Project Planets

Talk Python to Me: #107 Python concurrency with Curio

Planet Python - Sat, 2017-04-15 04:00
You have heard me go on and on about how Python 3.5's async and await changes the game for asynchronous programming in Python. But what exactly does that mean? How does it work in APIs? Internally?

Today I'm here with David Beazley, who has been deeply exploring this space with his project Curio.

Links from the show:

  • Curio on GitHub: github.com/dabeaz/curio
  • David: dabeaz.com
  • David on Twitter: @dabeaz
  • Ground up Concurrency Talk: youtube.com/watch?v=MCs5OvhV9S4
  • PeeWee ORM Async: github.com/05bit/peewee-async

Sponsored links:

  • Rollbar: rollbar.com/talkpythontome
  • Hired: hired.com
  • Talk Python Courses: training.talkpython.fm
Categories: FLOSS Project Planets

Justin Mayer: Python Development Environment on macOS Sierra and El Capitan

Planet Python - Sat, 2017-04-15 02:00

While installing Python and Virtualenv on macOS Sierra can be done several ways, this tutorial will guide you through the process of configuring a stock Mac system into a solid Python development environment.

First steps

This guide assumes that you have already installed Homebrew. For details, please follow the steps in the macOS Configuration Guide.

Python

We are going to install the latest 2.7.x version of Python via Homebrew. Why bother, you ask, when Apple includes Python along with macOS? Here are some reasons:

  • When using the bundled Python, macOS updates can nuke your Python packages, forcing you to re-install them.
  • As new versions of Python are released, the Python bundled with macOS will become out-of-date. Homebrew always has the most recent version.
  • Apple has made significant changes to its bundled Python, potentially resulting in hidden bugs.
  • Homebrew’s Python includes the latest versions of Pip and Setuptools (Python package management tools).

Along the same lines, the version of OpenSSL that comes with macOS is out-of-date, so we’re going to tell Homebrew to download the latest OpenSSL and compile Python with it.

Use the following command to install Python via Homebrew:

brew install python

You’ve already modified your PATH as mentioned in the macOS Configuration Guide, right? If not, please do so now.

Optionally, we can also install Python 3.x alongside Python 2.x:

brew install python3

… which makes it easy to test your code on both Python 2.x and Python 3.x.

Pip

Let’s say you want to install a Python package, such as the fantastic Virtualenv environment isolation tool. While nearly every Python-related article for macOS tells the reader to install it via sudo pip install virtualenv, the downsides of this method include:

  1. installs with root permissions
  2. installs into the system /Library
  3. yields a less reliable environment when using Homebrew’s Python

As you might have guessed by now, we’re going to use the tools provided by Homebrew to install the Python packages that we want to be globally available. When installing via Homebrew Python’s pip, packages will be installed to /usr/local/lib/python2.7/site-packages, with binaries placed in /usr/local/bin.

Version control (optional)

The first thing I pip-install is Mercurial, since I have Mercurial repositories that I push to both Bitbucket and GitHub. If you don’t want to install Mercurial, you can skip ahead to the next section.

The following command will install Mercurial and hg-git:

pip install Mercurial hg-git

At a minimum, you’ll need to add a few lines to your .hgrc file in order to use Mercurial:

vim ~/.hgrc

The following lines should get you started; just be sure to change the values to your name and email address, respectively:

[ui]
username = YOUR NAME <address@example.com>

To test whether Mercurial is configured and ready for use, run the following command:

hg debuginstall

If the last line in the response is “no problems detected”, then Mercurial has been installed and configured properly.

Virtualenv

Python packages installed via the steps above are global in the sense that they are available across all of your projects. That can be convenient at times, but it can also create problems. For example, sometimes one project needs the latest version of Django, while another project needs an older Django version to retain compatibility with a critical third-party extension. This is one of many use cases that Virtualenv was designed to solve. On my systems, only a handful of general-purpose Python packages (such as Mercurial and Virtualenv) are globally available — every other package is confined to virtual environments.

With that explanation behind us, let’s install Virtualenv:

pip install virtualenv

Create some directories to store our projects and virtual environments, respectively:

mkdir -p ~/Projects ~/Virtualenvs

We’ll then open Pip’s configuration file (which may need to be created if it doesn’t exist yet)…

vim ~/Library/Application\ Support/pip/pip.conf

… and add some lines to it:

[install]
require-virtualenv = true

[uninstall]
require-virtualenv = true

Now we have Virtualenv installed and ready to create new virtual environments, which we will store in ~/Virtualenvs. New virtual environments can be created via:

cd ~/Virtualenvs
virtualenv foobar

If you have both Python 2.x and 3.x and want to create a Python 3.x virtualenv:

virtualenv -p python3 foobar-py3

… which makes it easier to switch between Python 2.x and 3.x foobar environments.

Restricting Pip to virtual environments

What happens if we think we are working in an active virtual environment, but there actually is no virtual environment active, and we install something via pip install foobar? Well, in that case the foobar package gets installed into our global site-packages, defeating the purpose of our virtual environment isolation.

In an effort to avoid mistakenly Pip-installing a project-specific package into my global site-packages, I previously used easy_install for global packages and the virtualenv-bundled Pip for installing packages into virtual environments. That accomplished the isolation objective, since Pip was only available from within virtual environments, making it impossible for me to pip install foobar into my global site-packages by mistake. But easy_install has some deficiencies, such as the inability to uninstall a package, and I found myself wanting to use Pip for both global and virtual environment packages.

Thankfully, Pip has an undocumented setting (source) that tells it to bail out if there is no active virtual environment, which is exactly what I want. In fact, we’ve already set that above, via the require-virtualenv = true directive in Pip’s configuration file. For example, let’s see what happens when we try to install a package in the absence of an activated virtual environment:

$ pip install markdown
Could not find an activated virtualenv (required).

Perfect! But once that option is set, how do we install or upgrade a global package? We can temporarily turn off this restriction by defining a new function in ~/.bashrc:

gpip(){
    PIP_REQUIRE_VIRTUALENV="" pip "$@"
}

(As usual, after adding the above you must run source ~/.bashrc for the change to take effect.)

If in the future we want to upgrade our global packages, the above function enables us to do so via:

gpip install --upgrade pip setuptools wheel virtualenv

You could achieve the same effect via PIP_REQUIRE_VIRTUALENV="" pip install --upgrade foobar, but that’s much more cumbersome to type.

Creating virtual environments

Let’s create a virtual environment for Pelican, a Python-based static site generator:

cd ~/Virtualenvs
virtualenv pelican

Change to the new environment and activate it via:

cd pelican
source bin/activate

To install Pelican into the virtual environment, we’ll use pip:

pip install pelican markdown

For more information about virtual environments, read the Virtualenv docs.

Dotfiles

These are obviously just the basic steps to getting a Python development environment configured. Feel free to also check out my dotfiles (GitHub mirror).

If you found this article to be useful, please follow me on Twitter. Also, if you are interested in server security monitoring, be sure to sign up for early access to Monitorial!

Categories: FLOSS Project Planets

Gunnar Wolf: On Dmitry Bogatov and empowering privacy-protecting tools

Planet Debian - Sat, 2017-04-15 00:53

There is a thorny topic we have been discussing in nonpublic channels (say, the debian-private mailing list... It is impossible to call it a private list if it has close to a thousand subscribers, but it sometimes deals with sensitive material) for the last week. We have finally confirmation that we can bring this topic out to the open, and I expect several Debian people to talk about this. Besides, this information is now repeated all over the public Internet, so I'm not revealing anything sensitive. Oh, and there is a statement regarding Dmitry Bogatov published by the Tor project — But I'll get to Tor soon.

One week ago, the 25-year-old mathematician and Debian Maintainer Dmitry Bogatov was arrested, accused of organizing riots and calling for terrorist activities. All evidence so far points to the fact that Dmitry is not guilty of what he is charged with — He was filmed at different places at the times when the calls for terrorism happened.

It seems that Dmitry was arrested because he runs a Tor exit node. I don't know the current situation in Russia, nor his political leanings — But I do know what a Tor exit node looks like. I even had one at home for a short while.

What is Tor? It is a network overlay, meant for people to hide where they come from or who they are. Why? There are many reasons — Uninformed people will talk about the evil wrongdoers (starting the list of course with the drug sellers or child porn distributors). People who have taken their time to understand what this is about will rather talk about people for whom free speech is not a given; journalists, political activists, whistleblowers. And also, about regular people — Many among us have taken the habit of doing some of our Web surfing using Tor (probably via the very fine and interesting TAILS distribution — The Amnesiac Incognito Live System), just to increase the entropy, and just because we can, because we want to preserve the freedom to be anonymous before it's taken away from us.

There are many types of nodes in Tor; most of them are just regular users or bridges that forward traffic, helping Tor's anonymization. Exit nodes, where packets leave the Tor network and enter the regular Internet, are much scarcer — Partly because they can be quite problematic to people hosting them. But, yes, Tor needs more exit nodes, not just for bandwidth sake, but because the more exit nodes there are, the harder it is for a hostile third party to monitor a sizable number of them for activity (and break the anonymization).

I am coincidentally starting a project with a group of students of my Faculty (we want to breathe life again into LIDSOL - Laboratorio de Investigación y Desarrollo de Software Libre). As we are just starting, they are documenting some technical and social aspects of the need for privacy and how Tor works; I expect them to publish their findings in El Nigromante soon (which means... what? ☺ ), but definitively, part of what we want to do is to set up a Tor exit node at the university — Well documented and with enough academic justification to avoid our network operation area ordering us to shut it down. Let's see what happens :)

Anyway, all in all — Dmitry is in for a heavy time. He has been detained pre-trial at least until June, and he faces quite serious charges. He has done a lot of good, specialized work for the whole world to benefit. So, given I cannot do more, I'm just speaking my mind here in this space.

[Update] Dmitry's case has been covered in LWN. There is also a statement concerning the arrest of Dmitry Bogatov by the Debian project. This case is also covered at The Register.

Categories: FLOSS Project Planets

Full Stack Python: The Full Stack Python Blog

Planet Python - Sat, 2017-04-15 00:00

Full Stack Python began way back in December 2012 when I started writing the initial deployment, server, operating system, web server and WSGI server pages. The site has broadly expanded into many other subjects outside the deployment topics I originally started this site to explain.

However, I frequently wanted to write a Python walkthrough that was not a good fit for the page format I use for each topic. Many of those walkthroughs became Twilio blog posts but not all of them were quite the right fit on there. I'll still write more Twilio tutorials, but this Full Stack Python blog is the spot for technical posts that fall outside the Twilio domain.

Let me know what you think and what tutorials you'd like to see in the future.

Hit me up on Twitter @fullstackpython or @mattmakai.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: #5: Easy package information

Planet Debian - Fri, 2017-04-14 20:56

Welcome to the fifth post in the recklessly rambling R rants series, or R4 for short.

The third post showed an easy way to follow R development by monitoring (curated) changes on the NEWS file for the development version r-devel. As a concrete example, I mentioned that it has shown a nice new function (tools::CRAN_package_db()) coming up in R 3.4.0. Today we will build on that.

Consider the following short snippet:

library(data.table)

getPkgInfo <- function() {
    if (exists("CRAN_package_db", where = asNamespace("tools"))) {
        dat <- tools::CRAN_package_db()
    } else {
        tf <- tempfile()
        download.file("https://cloud.r-project.org/src/contrib/PACKAGES.rds",
                      tf, quiet=TRUE)
        dat <- readRDS(tf)       # r-devel can now readRDS off a URL too
    }
    dat <- as.data.frame(dat)
    setDT(dat)
    dat
}

It defines a simple function getPkgInfo() as a wrapper around said new function from R 3.4.0, ie tools::CRAN_package_db(), and a fallback alternative using a tempfile (in the automagically cleaned R temp directory) and an explicit download and read of the underlying RDS file. As an aside, just this week the r-devel NEWS told us that such readRDS() operations can now read directly from URL connection. Very nice---as RDS is a fantastic file format when you are working in R.

Anyway, back to the RDS file! The snippet above returns a data.table object with as many rows as there are packages on CRAN, and basically all their (parsed !!) DESCRIPTION info and then some. A gold mine!

Consider this to see how many package have a dependency (in the sense of Depends, Imports or LinkingTo, but not Suggests because Suggests != Depends) on Rcpp:

R> dat <- getPkgInfo()
R> rcppRevDepInd <- as.integer(tools::dependsOnPkgs("Rcpp", recursive=FALSE, installed=dat))
R> length(rcppRevDepInd)
[1] 998
R>

So exciting---we will hit 1000 within days! But let's do some more analysis:

R> dat[ rcppRevDepInd, RcppRevDep := TRUE]   # set to TRUE for given set
R> dat[ RcppRevDep==TRUE, 1:2]
           Package Version
  1:      ABCoptim  0.14.0
  2: AbsFilterGSEA     1.5
  3:           acc   1.3.3
  4: accelerometry   2.2.5
  5:      acebayes   1.3.4
 ---
994:        yakmoR   0.1.1
995:  yCrypticRNAs  0.99.2
996:         yuima   1.5.9
997:           zic     0.9
998:       ziphsmm   1.0.4
R>

Here we index the reverse dependency using the vector we had just computed, and then that new variable to subset the data.table object. Given the aforementioned parsed information from all the DESCRIPTION files, we can learn more:

R> ## likely false entries
R> dat[ RcppRevDep==TRUE, ][NeedsCompilation!="yes", c(1:2,4)]
            Package Version Depends
 1:         baitmet   1.0.0 Rcpp, erah (>= 1.0.5)
 2:           bea.R   1.0.1 R (>= 3.2.1), data.table
 3:            brms   1.6.0 R (>= 3.2.0), Rcpp (>= 0.12.0), ggplot2 (>= 2.0.0), methods
 4: classifierplots   1.3.3 R (>= 3.1), ggplot2 (>= 2.2), data.table (>= 1.10),
 5:           ctsem   2.3.1 R (>= 3.2.0), OpenMx (>= 2.3.0), Rcpp
 6:        DeLorean   1.2.4 R (>= 3.0.2), Rcpp (>= 0.12.0)
 7:            erah   1.0.5 R (>= 2.10), Rcpp
 8:             GxM     1.1 NA
 9:             hmi   0.6.3 R (>= 3.0.0)
10:        humarray     1.1 R (>= 3.2), NCmisc (>= 1.1.4), IRanges (>= 1.22.10), GenomicRanges (>= 1.16.4)
11:         iNextPD   0.3.2 R (>= 3.1.2)
12:          joinXL   1.0.1 R (>= 3.3.1)
13:            mafs   0.0.2 NA
14:            mlxR   3.1.0 R (>= 3.0.1), ggplot2
15:    RmixmodCombi     1.0 R(>= 3.0.2), Rmixmod(>= 2.0.1), Rcpp(>= 0.8.0), methods, graphics
16:             rrr   1.0.0 R (>= 3.2.0)
17:        UncerIn2     2.0 R (>= 3.0.0), sp, RandomFields, automap, fields, gstat
R>

There are a full seventeen packages which claim to depend on Rcpp while not having any compiled code of their own. That is likely false---but I keep them in my counts, however reluctantly. A CRAN-declared Depends: is a Depends:, after all.

Another nice thing to look at is the total number of packages that declare they need compilation:

R> ## number of packages with compiled code
R> dat[ , .(N=.N), by=NeedsCompilation]
   NeedsCompilation    N
1:               no 7625
2:              yes 2832
3:               No    1
R>

Isn't that awesome? It is 2832 out of (currently) 10458, or about 27.1%. Just over one in four. Now the 998 for Rcpp look even better, as they are about 35% of all such packages. In other words, a little over one third of all packages with compiled code (which may be legacy C, Fortran or C++) use Rcpp. Wow.

Before closing, one shoutout to Dirk Schumacher, whose thankr package, which I made the center of the last post, is now on CRAN. A mighty fine and slim micropackage without external dependencies. Neat.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

PyBites: Code Challenge 14 - Write DRY Code With Decorators - Review

Planet Python - Fri, 2017-04-14 19:00

It's end of the week again so we review the code challenge of this week. It's never late to sign up, just fork our challenges repo and start coding.

Categories: FLOSS Project Planets

Laura Arjona Reina: Underestimating Debian

Planet Debian - Fri, 2017-04-14 16:19

I had two issues in the last days that led me a bit into panic until they got solved. In both cases the issue was external to Debian, but I first thought that the problem was in Debian. I'm not sure why I had those thoughts; I should be more confident in myself, this awesome operating system, and the community around it! The good thing is that I'll be more confident from now on, and I've learned that hurry is not a good friend: I should face my computer "problems" (and everything in life, probably) with a bit more patience (and backups).

Issue 1: Corrupt ext partition in a laptop

I have a laptop at home with dual boot Windows 7 + Debian 9 (Stretch). I rarely boot the Windows partition. When I do, I do whatever I need to do/test there, then install updates, and then shutdown the laptop or reboot in Debian to feel happy again when using computers.

Some months ago I noticed that booting into Debian was not possible and I was left in an initramfs console that suggested running e2fsck /dev/sda6 (my Debian partition). I ran e2fsck, answered "a" to fix all the issues found, and the system booted properly again. This issue was a bit scary-looking because the e2fsck output made the screen show random numbers, scrolling quickly for 1 or 2 minutes, until all the inodes or blocks or whatever were fixed.

I thought about the disk being faulty, and ran badblocks, but faced the former boot issue again some time after, and then decided to change the disk (then I took the opportunity to make backups, and install a fresh Debian 9 Stretch in the laptop, instead of the Debian 8 stable that was running).

The experience with Stretch has been great since then, but some days ago I faced the boot issue again. Then I realised that maybe the issue was appearing when I booted Debian right after using Windows (and this was why it was appearing not very often in my timeline).

Categories: FLOSS Project Planets

Palantir: Project Management: The Musical! DrupalCon Trailer

Planet Drupal - Fri, 2017-04-14 16:10
By Alex Brandt - Fri, 04/14/2017, 15:10

Come see Project Management: The Musical! at DrupalCon Baltimore. April 25th at 2:15pm.

In this presentation we will cover...
  • How to get your project organized
  • What analytics and KPIs to review
  • How to handle scope creep
  • ...and many more facets of project management

We want to make your project a success.

Let's Chat.

Additional information about this session can be found on the DrupalCon site


Categories: FLOSS Project Planets

Agaric Collective: Doing links on Drupal 8

Planet Drupal - Fri, 2017-04-14 15:19

There are plenty of ways to create links when using Drupal 8 and I will share some of those ways in this post.

The easiest way to create internal links is using Link::createFromRoute

And it is used like this:

use Drupal\Core\Link;

$link = Link::createFromRoute('This is a link', 'entity.node.canonical', ['node' => 1]);

Using the Url object gives you more flexibility to create links, for instance, we can do the same as Link::createFromRoute method using the Url object like this:

use Drupal\Core\Link;
use Drupal\Core\Url;

$link = Link::fromTextAndUrl('This is a link', Url::fromRoute('entity.node.canonical', ['node' => 1]));

And actually Link::fromTextAndUrl is what Drupal recommends instead of using the deprecated l() method. Passing the Url object to the link object gives you great flexibility to create links, here are some examples:

Internal links which have no routes:

$link = Link::fromTextAndUrl('This is a link', Url::fromUri('base:robots.txt'));

External links:

$link = Link::fromTextAndUrl('This is a link', Url::fromUri('http://www.google.com'));

Using the data provided by a user:

$link = Link::fromTextAndUrl('This is a link', Url::fromUserInput('/node/1'));

The parameter passed to fromUserInput must start with /, #, or ?, or it will throw an exception.

Linking entities.

$link = Link::fromTextAndUrl('This is a link', Url::fromUri('entity:node/1'));

Entities are a special case, and there are more ways to link them:

use Drupal\node\Entity\Node;

$node = Node::load(1);
$link = $node->toLink();
$link->setText('This is a link');

And even using the route:

$link = Link::fromTextAndUrl('This is a link', Url::fromRoute('entity.node.canonical', ['node' => 1]));

Drupal usually expects a render array if you are going to print the link, so the Link object has a method for that:

$link->toRenderable();

which will return an array.

Final tips:

Searching a route using Drupal Console

The easiest way to find the route of a specific path is using Drupal Console, with the following command.

$ drupal router:debug | grep -i "\/node"

That will return something like:

entity.node.canonical                    /node/{node}
entity.node.delete_form                  /node/{node}/delete
entity.node.edit_form                    /node/{node}/edit
entity.node.preview                      /node/preview/{node_preview}/{view_mode_id}
entity.node.revision                     /node/{node}/revisions/{node_revision}/view
entity.node.version_history              /node/{node}/revisions
node.add                                 /node/add/{node_type}
node.add_page                            /node/add
node.multiple_delete_confirm             /admin/content/node/delete
node.revision_delete_confirm             /node/{node}/revisions/{node_revision}/delete
node.revision_revert_confirm             /node/{node}/revisions/{node_revision}/revert
node.revision_revert_translation_confirm /node/{node}/revisions/{node_revision}/revert/{langcode}
search.help_node_search                  /search/node/help
search.view_node_search                  /search/node
view.frontpage.page_1                    /node

Listing all the possible routes with that word, we can choose one and do:

drupal router:debug entity.node.canonical

And that will display more information about a specific route:

Route            entity.node.canonical
Path             /node/{node}
Defaults
  _controller      \Drupal\node\Controller\NodeViewController::view
  _title_callback  \Drupal\node\Controller\NodeViewController::title
Requirements
  node             \d+
  _entity_access   node.view
  _method          GET|POST
Options
  compiler_class   \Drupal\Core\Routing\RouteCompiler
  parameters       node: {type: 'entity:node', converter: paramconverter.entity}
  _route_filters   method_filter content_type_header_matcher
  _route_enhancers route_enhancer.param_conversion
  _access_checks   access_check.entity

This way we can find the route without needing to search through all the *.routing.yml files; in this example the route is entity.node.canonical and the expected parameter is node.

Print links directly within a twig template

It is also possible to print links directly on the twig template with the following syntax:

<a href="{{ url('entity.node.canonical', {'node': node.id()}) }}"> {{ 'This is a link'|t }} </a>

Add links inside a t() method.

If you want to add a link inside the t() method you need to pass the link as a string, something like this:

use Drupal\Core\Link;
use Drupal\Core\Url;

$link = Link::fromTextAndUrl('This is a link', Url::fromRoute('entity.node.canonical', ['node' => 1]));
$this->t('You can click this %link', ['%link' => $link->toString()]);
Categories: FLOSS Project Planets

FSF Events: Richard Stallman to speak in Santa Fe, Argentina

GNU Planet! - Fri, 2017-04-14 14:20
"El Software Libre y Tu Libertad" Richard Stallman hablará sobre las metas y la filosofía del movimiento del Software Libre, y el estado y la historia del sistema operativo GNU, el cual junto con el núcleo Linux, es actualmente utilizado por decenas de millones de personas en todo el mundo.

Esa charla de Richard Stallman no será técnica y será abierta al público; todos están invitados a asistir.

El título, el lugar exacto, y la hora de la charla serán determinados.

Lugar:Será determinado

Favor de rellenar este formulario, para que podamos contactarle acerca de eventos futuros en la región de Santa Fe.

Categories: FLOSS Project Planets

FSF Events: Richard Stallman to speak in Buenos Aires, Argentina

GNU Planet! - Fri, 2017-04-14 13:52

Richard Stallman's speech will be nontechnical and open to the public; everyone is invited to attend.

The title, the exact location, and the time of the speech are to be determined.

Location: To be determined

Please fill out this form so that we can contact you about future events in and around Buenos Aires.

Categories: FLOSS Project Planets

FSF Events: Richard Stallman - "Por una Sociedad Digital Libre" (Buenos Aires, Argentina)

GNU Planet! - Fri, 2017-04-14 13:47
There are many threats to freedom in the digital society, such as mass surveillance, censorship, digital handcuffs, proprietary software that controls users, and the war on the practice of sharing. The use of web services poses yet more threats to users' freedom. Finally, we have no firm right to do anything on the Internet; all of our online activity is precarious, and we may continue it only as long as companies are willing to cooperate.

Richard Stallman's speech will be nontechnical and open to the public; everyone is invited to attend.

The title and the time of the speech are to be determined.

Location:

Please fill out this form so that we can contact you about future events in and around Buenos Aires.

Categories: FLOSS Project Planets

FSF Events: Richard Stallman to speak in Curitiba

GNU Planet! - Fri, 2017-04-14 13:36

Richard Stallman's speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Speech topic and start time to be determined.

Location: To be determined

Please fill out our contact form, so that we can contact you about future events in and around Curitiba.

Categories: FLOSS Project Planets

FSF Events: Richard Stallman to speak in Campinas, Brazil

GNU Planet! - Fri, 2017-04-14 13:19

Richard Stallman's speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Speech topic and start time to be determined.

Location: To be determined

Please fill out our contact form, so that we can contact you about future events in and around Campinas.

Categories: FLOSS Project Planets

Yasoob Khalid: Making a Reddit + Facebook Messenger Bot

Planet Python - Fri, 2017-04-14 12:13

Hi guys! I haven’t been programming a lot lately because of exams. However, over the past weekend I managed to get a hold of my laptop and crank out something useful: a Facebook Messenger bot which serves you fresh memes, motivational posts, jokes, and shower thoughts. It was the first time I had delved into bot creation. In this post I will teach you most of what you need to know in order to get your bot off the ground.

First of all some screenshots of the final product:

Tech Stack

We will be making use of the following:

  • Flask framework for coding up the backend, as it is lightweight and lets us focus on the logic instead of the folder structure.
  • Heroku – for hosting our code online for free
  • Reddit – as a data source, because it gets new posts every minute
1. Getting things ready

Creating a Reddit app

We will be using Facebook, Heroku and Reddit. Firstly, make sure that you have an account on all three of these services. Next you need to create a Reddit application on this link.

In the above image you can already see the “motivation” app which I have created. Click on “create another app…” and follow the on-screen instructions.

The about and redirect URLs will not be used, so it is OK to leave them blank. For production apps it is better to put in something related to your project, so that if you start making a lot of requests and Reddit notices, they can check your app's about page and act in a more informed manner.

So now that your app is created you need to save the ‘client_id’ and ‘client_secret’ in a safe place.

One part of our project is done. Now we need to setup the base for our Heroku app.

Creating an App on Heroku

Go to this dashboard url and create a new application.

On the next page give your application a unique name.

From the next page click on “Heroku CLI” and download the latest Heroku CLI for your operating system. Follow the on-screen install instructions and come back once it has been installed.

Creating a basic Python application

The below code is taken from Konstantinos Tsaprailis’s website.

from flask import Flask, request
import json
import requests

app = Flask(__name__)

# This needs to be filled with the Page Access Token that will be provided
# by the Facebook App that will be created.
PAT = ''


@app.route('/', methods=['GET'])
def handle_verification():
    print "Handling Verification."
    if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me':
        print "Verification successful!"
        return request.args.get('hub.challenge', '')
    else:
        print "Verification failed!"
        return 'Error, wrong validation token'


@app.route('/', methods=['POST'])
def handle_messages():
    print "Handling Messages"
    payload = request.get_data()
    print payload
    for sender, message in messaging_events(payload):
        print "Incoming from %s: %s" % (sender, message)
        send_message(PAT, sender, message)
    return "ok"


def messaging_events(payload):
    """Generate tuples of (sender_id, message_text) from the
    provided payload.
    """
    data = json.loads(payload)
    messaging_events = data["entry"][0]["messaging"]
    for event in messaging_events:
        if "message" in event and "text" in event["message"]:
            yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape')
        else:
            yield event["sender"]["id"], "I can't echo this"


def send_message(token, recipient, text):
    """Send the message text to recipient with id recipient.
    """
    r = requests.post("https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": token},
        data=json.dumps({
            "recipient": {"id": recipient},
            "message": {"text": text.decode('unicode_escape')}
        }),
        headers={'Content-type': 'application/json'})
    if r.status_code != requests.codes.ok:
        print r.text


if __name__ == '__main__':
    app.run()

We will be modifying the file according to our needs. So basically a Facebook bot works like this:

  1. Facebook sends a request to our server whenever a user messages our page on Facebook.
  2. We respond to Facebook's request, storing the id of the user and the message that was sent to our page.
  3. We respond to the user's message through the Graph API using the stored user id and message.

A detailed breakdown of the above code is available on that website. In this post I will mainly be focusing on the Reddit integration and how to use the Postgres database on Heroku.

Before moving further let’s deploy the above Python code onto Heroku. For that you have to create a local Git repository. Follow these steps:

$ mkdir messenger-bot
$ cd messenger-bot
$ touch requirements.txt app.py Procfile

Execute the above commands in a terminal and put the above Python code into the app.py file. Put the following into Procfile:

web: gunicorn app:app

Now we need to tell Heroku which Python libraries our app will need to function properly. Those libraries will need to be listed in the requirements.txt file. I am going to fast-forward a bit over here and simply copy the requirements from this post. Put the following lines into requirements.txt file and you should be good to go for now.

click==6.6
Flask==0.11
gunicorn==19.6.0
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
requests==2.10.0
Werkzeug==0.11.10

Run the following command in the terminal and you should get a similar output:

$ ls
Procfile          app.py            requirements.txt

Now we are ready to create a Git repository which can then be pushed onto Heroku servers. We will carry out the following steps now:

  • Log in to Heroku
  • Create a new git repository
  • Commit everything into the new repo
  • Push the repo to Heroku

The commands required to achieve this are listed below:

$ heroku login
$ git init
$ heroku git:remote -a <app_name>
$ git commit -am "Initial commit"
$ git push heroku master
...
remote: https://<app_name>.herokuapp.com/ deployed to Heroku
...
$ heroku config:set WEB_CONCURRENCY=3

Save the URL printed after "remote:" above. It is the URL of your Heroku app; we will need it in the next step when we create a Facebook app.

Creating a Facebook App

Firstly we need a Facebook page. It is a requirement by Facebook to supplement every app with a relevant page.

Now we need to register a new app. Go to this app creation page and follow the instructions below.

Now head over to your app.py file and replace the PAT string on line 9 with the Page Access Token we saved above.

Commit everything and push the code to Heroku.

$ git commit -am "Added in the PAT" $ git push heroku master

Now if you go to the Facebook page and send it a message, you will get your own message back as a reply. This shows that everything we have done so far is working. If something does not work, check your Heroku logs, which will give you some clue about what is going wrong; a quick Google search will usually help you resolve the issue. You can access the logs like this:

$ heroku logs -t -a <app_name>

Note: Only your own messages will be answered by the Facebook page. If some other random user messages the page, the bot will not reply, because the bot has not yet been approved by Facebook. However, if you want a couple of users to test your app, you can add them as testers by going to your Facebook app's developer page and following the on-screen instructions.

Getting data from Reddit

We will be using data from the following subreddits: GetMotivated, memes, Jokes, and Showerthoughts.

First of all, let's install "praw", Reddit's Python library. This can easily be done by typing the following command into the terminal:

$ pip install praw

Now let’s test some Reddit goodness in a Python shell. I followed the docs which clearly show how to access Reddit and how to access a subreddit. Now is the best time to grab the “client_id” and “client_secret” which we created in the first part of this post.

$ python
Python 2.7.13 (default, Dec 17 2016, 23:03:43)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import praw
>>> reddit = praw.Reddit(client_id='**********',
...                      client_secret='*****************',
...                      user_agent='my user agent')
>>>
>>> submissions = list(reddit.subreddit("GetMotivated").hot(limit=None))
>>> submissions[-4].title
u'[Video] Hi, Stranger.'

Note: Don’t forget to add in your own client_id and client_secret in place of ****

Let’s discuss the important bits here. I am using limit=None because I want to get back as many posts as I can. Initially this feels like overkill, but you will quickly see that when a user starts using the Facebook bot frequently, we will run out of new posts if we limit ourselves to 10 or 20. An additional constraint is that we will only use image posts from GetMotivated and memes, and only text posts from Jokes and Showerthoughts. Because of this constraint, only one or two of the top 10 hot posts might be useful to us, since a lot of video submissions are also made to GetMotivated.
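To see why limit=None plus an early break is cheap, here is a minimal sketch (not the bot's actual code): generators are lazy, so we only pull items until the first one we can use. The fake_feed function below is a hypothetical stand-in for praw's paginated listing.

```python
# Record how many items we actually pulled from the (pretend) feed.
pulled = []

def fake_feed():
    """Hypothetical stand-in for reddit.subreddit(...).hot(limit=None)."""
    for i in range(1000):            # pretend there are 1000 hot posts
        pulled.append(i)             # note every item we fetch
        kind = "video" if i < 3 else "image"
        yield (i, kind)

first_image = None
for post_id, kind in fake_feed():
    if kind == "image":              # skip video posts, like the real bot
        first_image = post_id
        break                        # stop pulling as soon as we have one
```

Even though the feed "contains" 1000 posts, only four are ever materialized before the break.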

Now that we know how to access Reddit using the Python library we can go ahead and integrate it into our app.py.

Firstly add some additional libraries into our requirements.txt so that it looks something like this:

$ cat requirements.txt
click==6.6
Flask==0.11
gunicorn==19.6.0
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
requests==2.10.0
Werkzeug==0.11.10
flask-sqlalchemy
psycopg2
praw

Now if we only wanted to send the user an image or text taken from reddit, it wouldn’t have been very difficult. In the “send_message” function we could have done something like this:

import praw
...

def send_message(token, recipient, text):
    """Send the message text to recipient with id recipient.
    """
    if "meme" in text.lower():
        subreddit_name = "memes"
    elif "shower" in text.lower():
        subreddit_name = "Showerthoughts"
    elif "joke" in text.lower():
        subreddit_name = "Jokes"
    else:
        subreddit_name = "GetMotivated"
    ....
    if subreddit_name == "Showerthoughts":
        for submission in reddit.subreddit(subreddit_name).hot(limit=None):
            payload = submission.url
            break
    ...
    r = requests.post("https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": token},
        data=json.dumps({
            "recipient": {"id": recipient},
            "message": {"attachment": {
                "type": "image",
                "payload": {"url": payload}
            }}
        }),
        headers={'Content-type': 'application/json'})
    ...

But there is one issue with this approach: how will we know whether a particular image/text has already been sent to a user? We need some kind of id for each image/text we send so that we don't send the same post twice. To solve this we are going to use PostgreSQL together with the reddit post ids (every post on reddit has a unique id).

We are going to use a Many-to-Many relation. There will be two tables:

  • Users
  • Posts

Let’s first define them in our code and then I will explain how it will work:

from flask_sqlalchemy import SQLAlchemy
...
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
db = SQLAlchemy(app)
...

relationship_table = db.Table('relationship_table',
    db.Column('user_id', db.Integer, db.ForeignKey('users.id'), nullable=False),
    db.Column('post_id', db.Integer, db.ForeignKey('posts.id'), nullable=False),
    db.PrimaryKeyConstraint('user_id', 'post_id'))


class Users(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255), nullable=False)
    posts = db.relationship('Posts', secondary=relationship_table, backref='users')

    def __init__(self, name):
        self.name = name


class Posts(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, unique=True, nullable=False)
    url = db.Column(db.String, nullable=False)

    def __init__(self, name, url):
        self.name = name
        self.url = url

So the Users table has two fields. name will hold the id sent with the Facebook Messenger webhook request, and posts links to the other table, Posts. The Posts table has name and url fields: name is the reddit submission id, and url is the URL of that post. We don't strictly need the url field; I included it because I will be using it for some other purposes in the future.
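To make the many-to-many layout concrete, here is a hedged sketch of the same three tables in raw sqlite3 (what SQLAlchemy builds for us behind the scenes). The table and column names mirror the models above; the sample ids are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Users, Posts, and a link table whose composite primary key means a given
# (user, post) pair can only be recorded once.
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE posts (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL, url TEXT NOT NULL);
CREATE TABLE relationship_table (
    user_id INTEGER NOT NULL REFERENCES users(id),
    post_id INTEGER NOT NULL REFERENCES posts(id),
    PRIMARY KEY (user_id, post_id)
);
""")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'fb-sender-id')")
conn.execute("INSERT INTO posts (id, name, url) VALUES (1, 't3_abc', 'https://i.redd.it/x.jpg')")
conn.execute("INSERT INTO relationship_table VALUES (1, 1)")

# Has user 1 already been sent the reddit post with id 't3_abc'?
row = conn.execute("""
    SELECT COUNT(*) FROM relationship_table r
    JOIN posts p ON p.id = r.post_id
    WHERE r.user_id = 1 AND p.name = 't3_abc'
""").fetchone()
already_sent = row[0] > 0
```

This is exactly the question the bot asks before sending anything: is there a row linking this user to this reddit post id?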

So now the way our final code will work is this:

  • We request a list of posts from a particular subreddit. The call reddit.subreddit(subreddit_name).hot(limit=None) returns a generator, so we don't need to worry about memory.

  • We will check whether the particular post has already been sent to the user in the past or not
  • If the post has been sent in the past we will continue requesting more posts from Reddit until we find a fresh post
  • If the post has not been sent to the user, we send the post and break out of the loop
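The steps above can be condensed into a sketch that mirrors the schema: each known post id maps to the set of users who have already received it (an in-memory stand-in for the Posts table plus the relationship table; the real app does this with database queries).

```python
# post_id -> set of user ids the post was already sent to
store = {}

def next_fresh_post(user_id, feed, store):
    """Walk the lazy feed and return the first post this user hasn't seen."""
    for post_id, title in feed:
        users = store.setdefault(post_id, set())  # "get_or_create" the post row
        if user_id in users:
            continue               # already sent to this user, keep scanning
        users.add(user_id)         # record the send, like myUser.posts.append(...)
        return title
    return None                    # feed exhausted: no fresh post for this user

feed = [("t3_aaa", "first post"), ("t3_bbb", "second post")]
first = next_fresh_post("user-1", feed, store)
second = next_fresh_post("user-1", feed, store)
```

Each call returns the next post the user has never received, and None once the feed is exhausted.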

So the final code of the app.py file is this:

from flask import Flask, request
import json
import requests
from flask_sqlalchemy import SQLAlchemy
import os
import praw

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
db = SQLAlchemy(app)

reddit = praw.Reddit(client_id='*************',
                     client_secret='****************',
                     user_agent='my user agent')

# This needs to be filled with the Page Access Token that will be provided
# by the Facebook App that will be created.
PAT = '*********************************************'

quick_replies_list = [{
    "content_type": "text",
    "title": "Meme",
    "payload": "meme",
}, {
    "content_type": "text",
    "title": "Motivation",
    "payload": "motivation",
}, {
    "content_type": "text",
    "title": "Shower Thought",
    "payload": "Shower_Thought",
}, {
    "content_type": "text",
    "title": "Jokes",
    "payload": "Jokes",
}]


@app.route('/', methods=['GET'])
def handle_verification():
    print "Handling Verification."
    if request.args.get('hub.verify_token', '') == 'my_voice_is_my_password_verify_me':
        print "Verification successful!"
        return request.args.get('hub.challenge', '')
    else:
        print "Verification failed!"
        return 'Error, wrong validation token'


@app.route('/', methods=['POST'])
def handle_messages():
    print "Handling Messages"
    payload = request.get_data()
    print payload
    for sender, message in messaging_events(payload):
        print "Incoming from %s: %s" % (sender, message)
        send_message(PAT, sender, message)
    return "ok"


def messaging_events(payload):
    """Generate tuples of (sender_id, message_text) from the
    provided payload.
    """
    data = json.loads(payload)
    messaging_events = data["entry"][0]["messaging"]
    for event in messaging_events:
        if "message" in event and "text" in event["message"]:
            yield event["sender"]["id"], event["message"]["text"].encode('unicode_escape')
        else:
            yield event["sender"]["id"], "I can't echo this"


def send_message(token, recipient, text):
    """Send the message text to recipient with id recipient.
    """
    if "meme" in text.lower():
        subreddit_name = "memes"
    elif "shower" in text.lower():
        subreddit_name = "Showerthoughts"
    elif "joke" in text.lower():
        subreddit_name = "Jokes"
    else:
        subreddit_name = "GetMotivated"

    myUser = get_or_create(db.session, Users, name=recipient)

    if subreddit_name == "Showerthoughts":
        for submission in reddit.subreddit(subreddit_name).hot(limit=None):
            if submission.is_self == True:
                query_result = Posts.query.filter(Posts.name == submission.id).first()
                if query_result is None:
                    myPost = Posts(submission.id, submission.title)
                    myUser.posts.append(myPost)
                    db.session.commit()
                    payload = submission.title
                    break
                elif myUser not in query_result.users:
                    myUser.posts.append(query_result)
                    db.session.commit()
                    payload = submission.title
                    break
                else:
                    continue
        r = requests.post("https://graph.facebook.com/v2.6/me/messages",
            params={"access_token": token},
            data=json.dumps({
                "recipient": {"id": recipient},
                "message": {"text": payload,
                            "quick_replies": quick_replies_list}
            }),
            headers={'Content-type': 'application/json'})
    elif subreddit_name == "Jokes":
        for submission in reddit.subreddit(subreddit_name).hot(limit=None):
            if (submission.is_self == True) and (submission.link_flair_text is None):
                query_result = Posts.query.filter(Posts.name == submission.id).first()
                if query_result is None:
                    myPost = Posts(submission.id, submission.title)
                    myUser.posts.append(myPost)
                    db.session.commit()
                    payload = submission.title
                    payload_text = submission.selftext
                    break
                elif myUser not in query_result.users:
                    myUser.posts.append(query_result)
                    db.session.commit()
                    payload = submission.title
                    payload_text = submission.selftext
                    break
                else:
                    continue
        r = requests.post("https://graph.facebook.com/v2.6/me/messages",
            params={"access_token": token},
            data=json.dumps({
                "recipient": {"id": recipient},
                "message": {"text": payload}
            }),
            headers={'Content-type': 'application/json'})
        r = requests.post("https://graph.facebook.com/v2.6/me/messages",
            params={"access_token": token},
            data=json.dumps({
                "recipient": {"id": recipient},
                "message": {"text": payload_text,
                            "quick_replies": quick_replies_list}
            }),
            headers={'Content-type': 'application/json'})
    else:
        payload = "http://imgur.com/WeyNGtQ.jpg"
        for submission in reddit.subreddit(subreddit_name).hot(limit=None):
            if (submission.link_flair_css_class == 'image') or \
               ((submission.is_self != True) and ((".jpg" in submission.url) or (".png" in submission.url))):
                query_result = Posts.query.filter(Posts.name == submission.id).first()
                if query_result is None:
                    myPost = Posts(submission.id, submission.url)
                    myUser.posts.append(myPost)
                    db.session.commit()
                    payload = submission.url
                    break
                elif myUser not in query_result.users:
                    myUser.posts.append(query_result)
                    db.session.commit()
                    payload = submission.url
                    break
                else:
                    continue
        r = requests.post("https://graph.facebook.com/v2.6/me/messages",
            params={"access_token": token},
            data=json.dumps({
                "recipient": {"id": recipient},
                "message": {"attachment": {
                                "type": "image",
                                "payload": {"url": payload}},
                            "quick_replies": quick_replies_list}
            }),
            headers={'Content-type': 'application/json'})

    if r.status_code != requests.codes.ok:
        print r.text


def get_or_create(session, model, **kwargs):
    instance = session.query(model).filter_by(**kwargs).first()
    if instance:
        return instance
    else:
        instance = model(**kwargs)
        session.add(instance)
        session.commit()
        return instance


relationship_table = db.Table('relationship_table',
    db.Column('user_id', db.Integer, db.ForeignKey('users.id'), nullable=False),
    db.Column('post_id', db.Integer, db.ForeignKey('posts.id'), nullable=False),
    db.PrimaryKeyConstraint('user_id', 'post_id'))


class Users(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255), nullable=False)
    posts = db.relationship('Posts', secondary=relationship_table, backref='users')

    def __init__(self, name=None):
        self.name = name


class Posts(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, unique=True, nullable=False)
    url = db.Column(db.String, nullable=False)

    def __init__(self, name=None, url=None):
        self.name = name
        self.url = url


if __name__ == '__main__':
    app.run()

So put this code into the app.py file and push it to Heroku.

$ git commit -am "Updated the code with Reddit feature" $ git push heroku master

One last thing remains. We need to tell Heroku that we will be using a database. It is simple; just issue the following command in the terminal:

$ heroku addons:create heroku-postgresql:hobby-dev --app <app_name>

This will create a free hobby database which is enough for our project. Now we only need to initialise the database with the correct tables. In order to do that we first need to run the Python shell on our Heroku server:

$ heroku run python

Now in the Python shell type the following commands:

>>> from app import db
>>> db.create_all()

So now our project is complete. Congrats!

Let me discuss some interesting features of the code. Firstly, I am making use of the “quick-replies” feature of Facebook Messenger Bot API. This allows us to send some pre-formatted inputs which the user can quickly select. They will look something like this:

It is easy to display these quick replies to the user. With every post request to the Facebook graph API we send some additional data:

quick_replies_list = [{
    "content_type": "text",
    "title": "Meme",
    "payload": "meme",
}, {
    "content_type": "text",
    "title": "Motivation",
    "payload": "motivation",
}, {
    "content_type": "text",
    "title": "Shower Thought",
    "payload": "Shower_Thought",
}, {
    "content_type": "text",
    "title": "Jokes",
    "payload": "Jokes",
}]

Another interesting feature of the code is how we determine whether a post is a text, image or a video post. In the GetMotivated subreddit some images don’t have a “.jpg” or “.png” in their url so we rely on

submission.link_flair_css_class == 'image'

This way we are able to select even those posts which do not have a known image extension in the url.
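The whole rule can be pulled out into a small helper, sketched here against a hypothetical stand-in for praw's submission objects (only the three attributes the rule needs are modeled; the real bot inlines this check in its loop):

```python
from collections import namedtuple

# Hypothetical stand-in for a praw submission.
Submission = namedtuple("Submission", ["is_self", "url", "link_flair_css_class"])

def looks_like_image(submission):
    # The flair check catches image posts whose URL lacks a known extension.
    if submission.link_flair_css_class == 'image':
        return True
    # Otherwise fall back to link posts with a recognizable image URL.
    return (not submission.is_self) and ((".jpg" in submission.url) or (".png" in submission.url))

flaired = Submission(False, "http://imgur.com/WeyNGtQ", "image")
plain_jpg = Submission(False, "http://imgur.com/WeyNGtQ.jpg", None)
self_post = Submission(True, "just text", None)
```

The flaired post is accepted despite its extension-less URL, the plain .jpg link is accepted by the fallback, and the self (text) post is rejected.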

You might have noticed this bit of code in the app.py file:

payload = "http://imgur.com/WeyNGtQ.jpg"

It makes sure that if no new posts are found for a particular user (every subreddit has a maximum number of "hot" posts), we have at least something to return; otherwise we would get an undefined variable error.

Create the user if it doesn't exist:

The following function checks whether a user with the given name exists. If it does, it selects that user from the db and returns it; if it doesn't, it creates the user and then returns the newly created record:

myUser = get_or_create(db.session, Users, name=recipient)
...

def get_or_create(session, model, **kwargs):
    instance = session.query(model).filter_by(**kwargs).first()
    if instance:
        return instance
    else:
        instance = model(**kwargs)
        session.add(instance)
        session.commit()
        return instance

I hope you guys enjoyed the post. Please comment below if you have any questions. I am also starting premium advertising on the blog. This will either be in the form of sponsored posts or blog sponsorship for a particular time. I am still fleshing out the details. If your company works with Python and wants to reach out to potential customers, please email me on yasoob (at) gmail.com.

Source: You can get the code from GitHub as well


Categories: FLOSS Project Planets

Drupal core announcements: Drupal core security release window on Wednesday, April 19, 2017

Planet Drupal - Fri, 2017-04-14 11:29
Start: 2017-04-19 12:00 - 23:00 UTC
Organizers: xjm, catch, cilefen, David_Rothstein, stefan.r
Event type: Online meeting (e.g. IRC meeting)

The monthly security release window for Drupal 8 and 7 core will take place on Wednesday, April 19.

This does not mean that a Drupal core security release will necessarily take place on that date for any of the Drupal 8 or 7 branches, only that you should watch for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix or stable feature release on this date. The next window for a Drupal core patch (bug fix) release for all branches is Wednesday, May 03. The next scheduled minor (feature) release for Drupal 8 will be on Wednesday, October 5.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: FLOSS Project Planets

FeatherCast: Bryan Call, Apache Traffic Server and Traffic Control Summit

Planet Apache - Fri, 2017-04-14 10:40

The Apache Traffic Server and Traffic Control Summit is a mini conference happening in Miami at the same location as ApacheCon. In this interview we briefly talk to Bryan Call from the Apache Traffic Server project about this mini conference.

https://feathercastapache.files.wordpress.com/2017/04/traffic-summit-call.mp3
Categories: FLOSS Project Planets

I Fix Drupal: Synchronising production Drupal database to a dev environment

Planet Drupal - Fri, 2017-04-14 10:13
Drush has a great feature that allows you to limit the data exported when you dump a database. Using this feature you can pull the schema for the entire database while specifying tables whose data you wish to ignore. This can substantially reduce the time it takes to complete a sync operation, especially the time spent importing the DB on the target. The key to this piece of magic lies inside drushrc.php, where you can predefine the lists of tables that do not require data to be dumped using this option: $options['structure-tables']['common'] = array('cache', 'cache_*', '...
Categories: FLOSS Project Planets