Feeds

DrupalEasy: DrupalEasy Podcast 209 - Local Development Environments

Planet Drupal - 9 hours 38 min ago

Direct .mp3 file download.

Ted Bowman and Mike Anello, both back from DrupalCon Nashville, spend some quality time together to catch up on all the latest happenings involving local development environments. Ted hosted some BoFs about the topic, and Mike posted a comparison of some of the more popular Docker-based, Drupal-focused local development tools, so we figured it was a good time to devote an entire podcast on the topic. In addition, Mike and Ted name their "favorite thing" from DrupalCon Nashville.


Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Categories: FLOSS Project Planets

Holger Levsen: 20180423-technohippieparadise

Planet Debian - Mon, 2018-04-23 20:25
Trouble in techno hippie paradise

So I'm in some 'jungle' in Brazil, enjoying a good time with some friends whom another friend jokingly labeled cryptohippies, enjoying the silence, nature, good food, some cats & dogs, and 3G internet. Life is good here.

And then we decided to watch "Stare into the lights my pretties" and while it is a very good and insightful movie, it's also disturbing to see just how much we, as human societies, have changed ourselves mindlessly (or rather, out of our own minds) in very recent history.

Even though I'm not a smartphone user myself, and while I was seemingly aware and critical of many of the changes of the last two decades, the movie was still eye-opening to me. Now if only there weren't 100 distractions per day, maybe I could build on this. Or maybe I need to watch it every week, though that wouldn't work either, as the movie explains so well...

The movie also reminded me why I dislike being cc:ed on email so much (unless it's urgent, or I'm subscribed to the list being posted to). Usually during the day I (try to) ignore list mail, but I do check my personal inboxes, and if someone cc:s me, that breaks my train of thought. So it seems I still need to get better at ignoring stuff, even if it's pushed to me. Maybe especially then. (And hints for good .procmail rules for this are much appreciated.)
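As a sketch of the kind of recipe that would help here (untested, and the list address is just a placeholder): matching on the List-Id header files list mail into a folder even when my own address is also in To: or Cc:.

```
# ~/.procmailrc (sketch) -- file list mail away by List-Id, even when cc:ed
:0:
* ^List-Id:.*debian-devel\.lists\.debian\.org
lists/debian-devel
```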

Another interesting point: while the number of people addicted to nicotine has lately been going down globally, the number of network addicts has by now far outnumbered them. And yet the long-term effects of being online almost 24/365 have not been researched at all. The cigarette companies claimed that most doctors smoke. The IT industry claims it's normal to be online. What's your wakeup2smartphone time? Do you check email every day?

This movie also made me wonder what Debian's role will, can and should be in this future. (And where of course I don't only mean Debian, but free software, free societies, in general.)

So, this movie brings up many questions. (And it nicely explains why people would rather it didn't.) So go watch this movie! You will be touched, think, and check your email/smartphone afterwards.

(And finally, of course it's ironic that the movie is on YouTube. And so I learned that to download subtitles you need to tell youtube-dl explicitly; it's easiest with --all-subs. And by the way, youtube-dl-gui needs help with running on Python 3 and thus with getting into Debian.)

Categories: FLOSS Project Planets

Techiediaries - Django: Django 2 Tutorial for Beginners: Building a CRM

Planet Python - Mon, 2018-04-23 20:00

Throughout this beginner's tutorial for Django 2.0, we are going to learn to build web applications with Python and Django. The tutorial assumes no prior experience with Django, so we'll cover the basic concepts and elements of the framework, pairing essential theory with practice.

Basically, we are going to learn fundamental Django concepts while building a real-world real estate web application, starting from the idea, through database design, to full project implementation and deployment.

This tutorial covers not only the fundamentals of Django but also advanced topics such as how to integrate Django with modern front-end frameworks like Angular 2+, Vue, and React.

What's Django?

Django is an open source, Python-based web framework for building web applications quickly.

  • It's a pragmatic framework designed for developers working on projects with strict deadlines.
  • It's perfect for quickly creating prototypes and then continuing to build on them after client approval.
  • It follows a Model View Controller (MVC)-style design pattern (Django's own flavor of it is often called MTV: Model-Template-View).
  • Django uses the Python language, a general-purpose, powerful, and feature-rich programming language.
What's MVC?

MVC is a software architectural design pattern which encourages separation of concerns and effective collaboration between designers and developers working on the same project. It divides your app into three parts:

  • Model: responsible for data storage and management,
  • View: responsible for representing and rendering the user interface,
  • Controller: responsible for the logic that controls the user interface and works with the data model.

Thanks to MVC, you as a developer can work on the model and controller parts without being concerned with the user interface (which is left to designers), so if the designers change anything on the user interface side, you can rest assured that your code will not be affected.
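The three roles can be sketched with plain Python classes (the names here are illustrative, not Django APIs):

```python
# Model: owns data storage and retrieval (here just an in-memory list).
class TaskModel:
    def __init__(self):
        self._tasks = []

    def add(self, title):
        self._tasks.append(title)

    def all(self):
        return list(self._tasks)


# View: renders data; knows nothing about where the data came from.
class TaskView:
    def render(self, tasks):
        return "\n".join("- " + t for t in tasks)


# Controller: wires user actions to the model and picks a view.
class TaskController:
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_task(self, title):
        self.model.add(title)

    def show_tasks(self):
        return self.view.render(self.model.all())


controller = TaskController(TaskModel(), TaskView())
controller.add_task("write tutorial")
print(controller.show_tasks())
```

Changing how TaskView formats its output requires no change to TaskModel or TaskController, which is exactly the separation the pattern is after.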

Introduction to Python

Python is a general-purpose programming language suitable for developing all kinds of applications, including web applications. Python is known for its clean syntax and for a large standard library containing a wide range of modules that developers can use to build their applications instead of reinventing the wheel.

Here is a list of features and characteristics of Python:

  • Python is an object-oriented language, just like Java or C++. Also like Java, Python is an interpreted language that runs on top of its own virtual machine, which makes it portable: it can run on practically every machine and operating system, such as Linux, Windows, and macOS.

  • Python is especially popular among the scientific community where it's used for creating numeric applications.

  • Python is also known for the solid performance of its runtime environment, which makes it a good alternative to PHP for developing web applications.

For more information you can head to http://python.org/ where you can also download Python binaries for supported systems.

For Linux and macOS, Python is included by default, so you don't have to install it. For Windows, just head over to the official Python website and grab your installer. Just like for any normal Windows program, the installation process is easy and straightforward.

Why Use Django?

Due to Python's popularity and large community, there are numerous Python web frameworks, among them Django. So what makes Django the right choice for you or your next project?

Django is a batteries-included framework

Django includes a set of batteries that can be used to solve common web problems without reinventing the wheel, such as:

  • the sites framework,
  • the auth system,
  • forms generation,
  • an ORM for abstracting database systems,
  • a very powerful templating engine,
  • a caching system,
  • an RSS generation framework, etc.
The Django ORM

Django has a powerful ORM (Object Relational Mapper) which allows developers to use Python classes and methods instead of SQL tables and queries to work with SQL-based databases. Thanks to the Django ORM, developers can work with any database system, such as MySQL or PostgreSQL, without knowing much SQL. At the same time, the ORM doesn't get in the way: you can write custom SQL anytime you want, especially if you need to optimize the queries against your database server for increased performance.
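To make that concrete, here is roughly what the ORM saves you from writing by hand. The query in the comment is the kind of thing you would write with the ORM (Contact is a hypothetical model); the sqlite3 code below it shows the sort of parameterized SQL such a call compiles down to:

```python
import sqlite3

# With the Django ORM you would write something like (Contact is a
# hypothetical model, not defined here):
#     Contact.objects.filter(city="Boston").count()
# Under the hood, that becomes parameterized SQL like the following.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO contact (name, city) VALUES (?, ?)",
    [("Alice", "Boston"), ("Bob", "Denver"), ("Carol", "Boston")],
)
count = conn.execute(
    "SELECT COUNT(*) FROM contact WHERE city = ?", ("Boston",)
).fetchone()[0]
print(count)
```

The ORM generates and executes SQL of this shape for you, for whichever database backend you configured.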

Support for Internationalization: i18n

Thanks to its powerful support for internationalization, you can easily use Django to write web applications in languages other than English, or even create multilingual websites.

The Admin Interface

Django is a very suitable framework for quickly building prototypes thanks to its auto-generated admin interface.

Using only a few lines of code, you can generate a full-fledged admin application that can perform all sorts of CRUD operations against the database models you have registered with the admin module.
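For instance, assuming a hypothetical Contact model, an app's admin.py could look like this (a sketch; it only runs inside a Django project):

```python
# admin.py -- illustrative sketch for a Django app
from django.contrib import admin

from .models import Contact  # hypothetical model


@admin.register(Contact)
class ContactAdmin(admin.ModelAdmin):
    list_display = ('name', 'email')  # columns shown in the admin change list
```

Registering the model is all it takes; Django generates the list, add, edit, and delete pages for it.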

Community and Extensive Documentation

Django has a great community that has contributed all sorts of awesome things, from tutorials and books to reusable open source packages that extend the core framework with solutions to even more web development problems, so you don't have to reinvent the wheel or waste time implementing what other developers have already created.

Django also has some of the most extensive and useful documentation on the web, which can get you up and running with Django in no time.

In conclusion: if you are a Python developer looking for a feature-packed web framework that makes building web applications fun and easy, and that has everything you can expect from a modern framework, Django is the right choice for you.

  • Python is a portable programming language that can be used anywhere its runtime environment is installed.

  • Django is a Python framework which can be installed on any system which supports the Python language.

In this part of the tutorial, we are going to see how to install Python and Django on the major operating systems, i.e. Windows, Linux, and macOS.

Installing Python

Depending on your operating system, you may or may not need to install Python. On Linux and macOS, Python is included by default; you may only need to update it if the installed version is outdated.

Installing Python On Windows

Python is not installed by default on Windows, so you'll need to grab the installer from the official Python website at http://www.python.org/download/. Next, launch the installer and follow the wizard to install Python just like any other Windows program.

Also make sure to add the Python root folder to the system PATH environment variable so you can execute the Python interpreter from any directory in the command prompt.

Next, open a command prompt and type python. You should be presented with a Python interactive shell printing the current version of Python and prompting you to enter Python commands (Python is an interpreted language).

Installing Python on Linux

If you are using a Linux system, there is a great chance that you already have Python installed but you may have an old version. In this case you can very easily update it via your terminal depending on your Linux distribution.

For Debian based distributions, like Ubuntu you can use the apt package manager

sudo apt-get install python

This will install Python or update it to the latest version available in your distribution's repositories.

For other Linux distributions, you should look for the equivalent command to install or update Python. This is not a daunting task: if you already use a package manager to install packages on your system, follow the same process to install or update Python.

Installing Python on MAC OS

Just like on Linux, Python is included by default on macOS, but in case you have an old version you can update it by going to http://www.python.org/download/mac/ and grabbing a Python installer for macOS.

Now, once you have installed or updated Python, or verified that an up-to-date version is already present on your system, let's continue by installing Django.

Installing PIP

PIP is a Python package manager used to install Python packages from the Python Package Index (PyPI). It's more advanced than easy_install, the older package manager that ships with Python.

You should use pip instead of easy_install whenever you can, but to install pip itself you can use easy_install. So let's first install pip:

Open your terminal and enter:

$ sudo easy_install pip

You can now install Django on your system using pip

$ sudo pip install django

While you can install Django globally on your system this way, it's strongly recommended not to. Instead, you should use a virtual environment to install packages.

virtualenv

virtualenv is a tool that allows you to work on multiple Python projects with different (and often conflicting) requirements on the same system, without any problems, by creating multiple isolated virtual environments for Python packages.

Now let's first install virtualenv using pip:

$ sudo pip install virtualenv

Alternatively, you can install virtualenv from its official website before even installing pip.

In this case, you don't need to install pip separately because it comes bundled with virtualenv and gets copied into every virtual environment you create.

Creating a Virtual Environment

After installing virtualenv you can now create your first virtual environment using your terminal:

$ cd ~/where-ever-you-want
$ virtualenv env

Next you should activate your virtual environment:

$ source env/bin/activate

Now you can install any Python package using pip inside your created virtual environment.

Let's install Django!

Installing Django

After creating a new virtual environment and activating it, it's time to install Django using pip:

$ pip install django

Django will only be installed in the activated virtual environment, not globally.

Now let's summarize what we have done:

  • we first installed Python
  • we then installed pip and virtualenv to install packages from PyPI and create isolated virtual environments for multiple Python projects
  • finally we created a virtual environment and installed Django.

Now that we have installed the required development tools, including the Django framework, it's time for the first real step: building our real estate application while learning Django essentials from scratch.

The Django framework includes a bunch of very useful utilities for creating and managing projects, accessible through a Python script called django-admin.py that became available when we installed Django.

In this section we are going to see how to:

  • create a new project
  • set up and create the project database
  • start the development server
Create a new Django project

Creating a new Django project is quick and easy, so open your terminal or command prompt and enter:

$ django-admin.py startproject crm

This command will take care of creating a bunch of necessary files for the project.

Executing the tree command in the root of the created project shows the files that were generated:

.
├── crm
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py

__init__.py is the Python way of marking the containing folder as a Python package, which means a Django project is a Python package.

settings.py is the project configuration file. You can use this file to specify every configuration option of your project such as the installed apps, site language and database options etc.

urls.py is a special Django file which maps all your web app's URLs to views.
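As an illustrative sketch (the home view is hypothetical and not part of the generated project), a Django 2 urls.py might look like:

```python
# crm/urls.py -- illustrative sketch
from django.contrib import admin
from django.urls import path

from . import views  # hypothetical views module

urlpatterns = [
    path('admin/', admin.site.urls),       # the auto-generated admin
    path('', views.home, name='home'),     # maps the site root to views.home
]
```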

wsgi.py is necessary for starting a wsgi application server.

manage.py is another Django utility for managing the project, including creating the database and starting the local development server.

These are the basic files that you will find in every Django project. Now the next step is to set up and create the database.

Setting Up the Database

Using your favorite code editor or IDE, open your project's settings.py file and let's configure the database:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

Django works with multiple database systems, from simple to advanced, both open source and proprietary: SQLite, MySQL, PostgreSQL, SQL Server, Oracle, etc.

You can also switch to another database system whenever you want, even after you've started developing your web app, without any problems, thanks to the Django ORM abstracting away how you work with the database.

For the sake of simplicity, we'll be using SQLite: it comes bundled with Python, so our database configuration is actually already set up for development. For deployment you can switch to an advanced database system such as MySQL or PostgreSQL by just editing this configuration option.
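For example, switching to PostgreSQL for deployment would mean replacing the DATABASES setting with something like the following (names and credentials are placeholders):

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'crm',
        'USER': 'crm_user',
        'PASSWORD': 'change-me',   # placeholder -- use a real secret in production
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```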

Finally, we need to tell Django to actually create the database and its tables. Even though we haven't written any code or data for our app yet, Django needs to create many tables for its internal use. So let's create the database.

Creating the database and the tables is a matter of issuing this one command.

$ python manage.py migrate

You should get an output like:

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying sessions.0001_initial... OK

Since we are using an SQLite database, you should also find a db.sqlite3 file in the current directory:

.
├── db.sqlite3
├── crm
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── settings.py
│   ├── settings.pyc
│   ├── urls.py
│   ├── urls.pyc
│   └── wsgi.py
└── manage.py

Starting the local development server

Django ships with a local development server that can be used while developing your project. It's a simple and primitive server, suitable only for development, not for production.

To start the local server for your project, you can simply issue the following command inside your project root directory:

$ python manage.py runserver

Next navigate to http://localhost:8000/ with a web browser.

You should see a web page with a message:

It worked! Congratulations on your first Django-powered page.

Next, start your first app by running python manage.py startapp [app_label].

You're seeing this message because you have DEBUG = True in your Django settings file and you haven't configured any URLs. Get to work!

Conclusion

To conclude this tutorial, let's summarize what we have done: we created a new Django project, created and migrated an SQLite database, and started the local development server. In the next tutorial we are going to start creating our real estate prototype.

Categories: FLOSS Project Planets

DrupalEasy: Using Drupal's Linked Field module to output fields as links in view modes

Planet Drupal - Mon, 2018-04-23 19:07

With Drupal 8, the use of view modes (both default and custom) is gaining momentum, especially because of how easily they can be utilized by Views. Rather than specifying a list of fields with all the required configuration in a View, many site-builders find it much easier to define a set of view modes for their entities, have them themed once, and then re-use them throughout the site - including as part of a View's output via the "Show: Content" option in a View's "Format" configuration.

One hiccup that can slow this down is the not-uncommon case where a field needs to be output as a link to some destination. While this is relatively easy to do with Views' "Rewrite fields" options, there isn't an obvious solution when using view modes.

Enter the Linked Field module. As its name implies, it provides the ability for any field to be output as a link via its formatter settings. The formatter can be configured with a custom URL or it can use the value of a URL from another field value.

Think of this module as an "Output this field as a custom link" option for view modes!

The "Advanced" configuration section of the formatter settings includes the ability to customize things like the title, target, and class attributes for each link.

If your goal is to maximize the use of display modes throughout your site, then this contributed module is an important tool in your arsenal.  

Categories: FLOSS Project Planets

Yasoob Khalid: Reverse Engineering Facebook: Public Video Downloader

Planet Python - Mon, 2018-04-23 18:09

In the last post we took a look at downloading songs from Soundcloud. In this post we will take a look at Facebook and how we can create a downloader for Facebook videos. It all started with me wanting to download a video from Facebook which I had the copyrights to. I wanted to automate the process so that I could download multiple videos with just one command. Now there are tools like youtube-dl which can do this job for you but I wanted to explore Facebook’s API myself. Without any further ado let me show you step by step how I approached this project. In this post we will cover downloading public videos. In the next post I will take a look at downloading private videos.

Step 1: Finding a Video

Find a video which you own and have copyrights to. Now there are two types of videos on Facebook. The main type is the public videos which can be accessed by anyone and then there are private videos which are accessible only by a certain subset of people on Facebook. Just to keep things easy, I initially decided to use a public video with plans on expanding the system for private videos afterwards.

Step 2: Recon

In this step we will open up the video in a new tab where we aren’t logged in just to see whether we can access these public videos without being logged in or not. I tried doing it for the video in question and this is what I got:

Apparently we can’t access the globally shared video as well without logging in. However, I remembered that I recently saw a video without being logged in and that piqued my interest. I decided to explore the original video a bit more.

I right-clicked on the original video just to check its source and to figure out whether the video URL could be reconstructed from the original page URL. Instead of finding the video source, I found a different URL which can be used to share this video. Take a look at these pictures to get a better understanding of what I am talking about:

I tried opening this URL in a new window without being logged in, and boom! The video opened! Now, I am not sure whether it worked by sheer luck or whether it really is a valid way to view a video without being logged in, but I tried this on multiple videos and it worked every single time. Either way, we have a way to access the video without logging in, and now it's time to intercept the requests Facebook makes when we try to play the video.

Open up Chrome developer tools and click on the XHR button just like this:

XHR stands for XMLHttpRequest and is used by websites to request additional data using JavaScript once the webpage has been loaded. The Mozilla docs have a good explanation of it:

Use XMLHttpRequest (XHR) objects to interact with servers. You can retrieve data from a URL without having to do a full page refresh. This enables a Web page to update just part of a page without disrupting what the user is doing. XMLHttpRequest is used heavily in Ajax programming.

Filtering requests using XHR allows us to cut down the number of requests we have to look through. It might not always work, so if you don't see anything interesting after filtering requests using XHR, take a look at the "all" tab.

The XHR tab was interesting: it did not contain any API requests. Instead, the very first requested link was the MP4 video itself.

This was surprising, because companies like Facebook usually put an intermediate server in between so that they don't have to hardcode the MP4 links into the webpage. However, if it is this easy for me, who am I to complain?

My very next step was to search for this url in the original source of the page and luckily I found it:

This confirmed my suspicion: Facebook hardcodes the video URL in the original page if you view the page without signing in. We will later see how this is different when you are signed in. In the current case, the URL is found in a <script> tag.

 

Step 3: Automating it

Now let’s write a Python script to download public videos. The script is pretty simple. Here is the code:

import requests as r
import re
import sys

url = sys.argv[-1]
html = r.get(url)
video_url = re.search('hd_src:"(.+?)"', html.text).group(1)
print(video_url)

Save the above code in a video_download.py file and use it like this:

$ python video_download.py video_url

Don’t forget to replace video_url with actual video url of this form:

https://www.facebook.com/username/videos/10213942282701232/

The script gets the video URL from the command line. It then opens the video page using requests and uses a regular expression to parse the video URL out of the page. This might not work if the video isn't available in HD; I leave it up to you to figure out how to handle that case.

That is all for today. I will cover the downloading of your private videos in the next post. That is a bit more involved and requires you logging into Facebook. Follow the blog and stay tuned! If you have any questions/comments/suggestions please use the comment form or email me.

Have a great day!

Categories: FLOSS Project Planets

Yasoob Khalid: Reverse Engineering Facebook API: Private Video Downloader

Planet Python - Mon, 2018-04-23 18:08

Welcome back! This is the third post in the reverse engineering series. The first post was reverse engineering Soundcloud API and the second one was reverse engineering Facebook API to download public videos. In this post we will take a look at downloading private videos. We will reverse engineer the API calls made by Facebook and will try to figure out how we can download videos in the HD format (when available).

Step 1: Recon

The very first step is to open up a private video in an incognito tab, just to make sure we cannot access it without logging in. This should be the response from Facebook:

This confirms that we cannot access the video without logging in. Sometimes this is pretty obvious, but it doesn't hurt to check.

So we know our first step: figure out a way to log into Facebook using Python. Only after that can we access the video. Let's log in using the browser and check what information is required.

I won’t go into much detail for this step. The gist is that while logging in, the desktop website and the mobile website require roughly the same POST parameters but interestingly if you log-in using the mobile website you don’t have to supply a lot of additional information which the desktop website requires. You can get away with doing a POST request to the following URL with your username and password:

https://m.facebook.com/login.php

We will later see that the subsequent API requests will require a fb_dtsg parameter. The value of this parameter is embedded in the HTML response and can easily be extracted using regular expressions or a DOM parsing library.

Let’s continue exploring the website and the video API and see what we can find.

Just like what we did in the last post, open up the video, monitor the XHR requests in the Developer Tools and search for the MP4 request.

The next step is to figure out where the MP4 link comes from. I tried searching the original HTML page but couldn't find the link, which means Facebook uses an XHR API request to get the URL from the server. We need to search through all of the XHR API requests and check their responses for the video URL. I did just that, and the response of the third API request contained the MP4 link:

The API request was a POST request and the url was:

https://www.facebook.com/video/tahoe/async/10114393524323267/?chain=true&isvideo=true&originalmediaid=10214393524262467&playerorigin=permalink&playersuborigin=tahoe&ispermalink=true&numcopyrightmatchedvideoplayedconsecutively=0&storyidentifier=DzpfSTE1MzA5MDEwODE6Vks6MTAyMTQzOTMNjE4Njc&dpr=2

I tried to deconstruct the URL. The major dynamic parts of the URL seem to be originalmediaid and storyidentifier. I searched the original HTML page and found that both of these were present in the original video page. We also need to figure out the POST data sent with this request. These are the parameters that were sent:

__user: <---redacted-->
__a: 1
__dyn: <---redacted-->
__req: 3
__be: 1
__pc: PHASED:DEFAULT
__rev: <---redacted-->
fb_dtsg: <---redacted-->
jazoest: <---redacted-->
__spin_r: <---redacted-->
__spin_b: <---redacted-->
__spin_t: <---redacted-->

I have redacted most of the values so that my personal information is not leaked, but you get the idea. I again searched the HTML page and was able to find most of the information in it. Certain information, like jazoest, was not in the HTML page, but as we move along you will see that we don't really need it to download the video; we can simply send an empty string in its place.

It seems like we have all the pieces we need to download a video. Here is an outline:

  1. Open the Video after logging in
  2. Search for the parameters in the HTML response to craft the API url
  3. Open the API url with the required POST parameters
  4. Search for hd_src or sd_src in the response of the API request

Now let's create a script to automate these tasks for us.

Step 2: Automate it

The very first step is to figure out how the login takes place. In the recon phase I mentioned that you can easily log in using the mobile website. We will do exactly that: log in using the mobile website and then open the homepage with the authenticated cookies so that we can extract the fb_dtsg parameter for subsequent requests.

import requests
import re
import urllib.parse

email = ""
password = ""

session = requests.session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux i686; rv:39.0) Gecko/20100101 Firefox/39.0'
})

response = session.get('https://m.facebook.com')
response = session.post('https://m.facebook.com/login.php', data={
    'email': email,
    'pass': password
}, allow_redirects=False)

Replace the email and password variables with your own email and password, and this script should log you in. How do we know whether we have successfully logged in? We can check for the presence of the c_user key in the cookies. If it exists, the login was successful.

Let’s check that and extract the fb_dtsg from the homepage. While we are at that let’s extract the user_id from the cookies as well because we will need it later.

if 'c_user' in response.cookies:
    # login was successful
    homepage_resp = session.get('https://m.facebook.com/home.php')
    fb_dtsg = re.search('name="fb_dtsg" value="(.+?)"', homepage_resp.text).group(1)
    user_id = response.cookies['c_user']

So now we need to open the video page, extract all of the required API POST arguments from it, and make the POST request.

if 'c_user' in response.cookies:
    # login was successful
    homepage_resp = session.get('https://m.facebook.com/home.php')
    fb_dtsg = re.search('name="fb_dtsg" value="(.+?)"', homepage_resp.text).group(1)
    user_id = response.cookies['c_user']

    video_url = "https://www.facebook.com/username/videos/101214393524261127/"
    video_id = re.search('videos/(.+?)/', video_url).group(1)

    video_page = session.get(video_url)
    identifier = re.search('ref=tahoe","(.+?)"', video_page.text).group(1)

    final_url = "https://www.facebook.com/video/tahoe/async/{0}/?chain=true&isvideo=true&originalmediaid={0}&playerorigin=permalink&playersuborigin=tahoe&ispermalink=true&numcopyrightmatchedvideoplayedconsecutively=0&storyidentifier={1}&dpr=2".format(video_id, identifier)

    data = {
        '__user': user_id,
        '__a': '',
        '__dyn': '',
        '__req': '',
        '__be': '',
        '__pc': '',
        '__rev': '',
        'fb_dtsg': fb_dtsg,
        'jazoest': '',
        '__spin_r': '',
        '__spin_b': '',
        '__spin_t': '',
    }

    api_call = session.post(final_url, data=data)
    try:
        final_video_url = re.search('hd_src":"(.+?)",', api_call.text).group(1)
    except AttributeError:
        final_video_url = re.search('sd_src":"(.+?)"', api_call.text).group(1)
    print(final_video_url)

You might be wondering what the data dictionary is doing and why there are a lot of keys with empty values. Like I said during the recon process, I tried making successful POST requests using the minimum amount of data. As it turns out Facebook only cares about fb_dtsg and the __user key. You can let everything else be an empty string. Make sure that you do send these keys with the request though. It doesn’t work if the key is entirely absent.

At the very end of the script we first search for the HD source and then the SD source of the video. If HD source is found we output that and if not then we output the SD source.

Our final script looks something like this:

import requests
import re
import urllib.parse
import sys

email = sys.argv[-2]
password = sys.argv[-1]
print("Email: " + email)
print("Pass: " + password)

session = requests.session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux i686; rv:39.0) Gecko/20100101 Firefox/39.0'
})
response = session.get('https://m.facebook.com')
response = session.post('https://m.facebook.com/login.php', data={
    'email': email,
    'pass': password
}, allow_redirects=False)

if 'c_user' in response.cookies:
    # login was successful
    homepage_resp = session.get('https://m.facebook.com/home.php')
    fb_dtsg = re.search('name="fb_dtsg" value="(.+?)"', homepage_resp.text).group(1)
    user_id = response.cookies['c_user']
    video_url = sys.argv[-3]
    print("Video url: " + video_url)
    video_id = re.search('videos/(.+?)/', video_url).group(1)
    video_page = session.get(video_url)
    identifier = re.search('ref=tahoe","(.+?)"', video_page.text).group(1)
    final_url = "https://www.facebook.com/video/tahoe/async/{0}/?chain=true&isvideo=true&originalmediaid={0}&playerorigin=permalink&playersuborigin=tahoe&ispermalink=true&numcopyrightmatchedvideoplayedconsecutively=0&storyidentifier={1}&dpr=2".format(video_id, identifier)
    data = {
        '__user': user_id,
        '__a': '',
        '__dyn': '',
        '__req': '',
        '__be': '',
        '__pc': '',
        '__rev': '',
        'fb_dtsg': fb_dtsg,
        'jazoest': '',
        '__spin_r': '',
        '__spin_b': '',
        '__spin_t': '',
    }
    api_call = session.post(final_url, data=data)
    try:
        final_video_url = re.search('hd_src":"(.+?)",', api_call.text).group(1)
    except AttributeError:
        final_video_url = re.search('sd_src":"(.+?)"', api_call.text).group(1)
    print(final_video_url.replace('\\', ''))

I made a couple of changes to the script. I used sys.argv to get the video_url, email and password from the command line. You can hardcode your username and password if you want.

Save the above file as facebook_downloader.py and run it like this:

$ python facebook_downloader.py video_url email password

Replace video_url with the actual video url like this https://www.facebook.com/username/videos/101214393524261127/ and replace the email and password with your actual email and password.

After running this script, it will output the source url of the video to the terminal. You can open the URL in your browser and from there you should be able to right-click and download the video easily.
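If you want to skip the manual browser step, the download itself can also be automated. Below is a minimal sketch using only the standard library; the filename_from_url helper is a hypothetical convenience I am adding for illustration, not part of the script above:

```python
import urllib.parse
import urllib.request

def filename_from_url(url):
    # Hypothetical helper: derive a local filename from the video URL.
    path = urllib.parse.urlparse(url).path
    name = path.rstrip('/').split('/')[-1] or 'video'
    return name if name.endswith('.mp4') else name + '.mp4'

def download(url, dest=None, chunk_size=1 << 16):
    # Stream the response to disk so large videos do not fill up memory.
    dest = dest or filename_from_url(url)
    with urllib.request.urlopen(url) as resp, open(dest, 'wb') as out:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
    return dest

# Example: download(final_video_url)
```

You could call download(final_video_url) at the end of the script instead of printing the URL.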

I hope you guys enjoyed this quick tutorial on reverse engineering the Facebook API for making a video downloader. If you have any questions/comments/suggestions please put them in the comments below or email me. I will look at reverse engineering a different website for my next post. Follow my blog to stay updated!

Thanks! Have a great day!

 

Categories: FLOSS Project Planets

Jacob Rockowitz: Content First, Technology Second

Planet Drupal - Mon, 2018-04-23 17:54

Many years ago, I was going for a morning stroll in New Jersey with my dad. We walked past a complete stranger and my dad said, “Hello.” My parents are divorced and I lived (mostly) with my mom in Brooklyn, so I spent the majority of my childhood in a place where no one says hello to strangers. I was bewildered, so I asked him why we had just said hi to a complete stranger, and he said...

That is truly what he said and I can honestly say, it is the one and only time I have ever heard Sheldon Rockowitz swear. He broke his polite demeanor to emphasize to me the importance of saying hi.

Fast forward 20 years. I am now a father and my four-year-old son is complaining that he has no one to play with. I point to the closest child his age and tell him to go up to the kid and say 'Hi, my name is Ben'. He did it and discovered the concept and value of initiating a greeting with a complete stranger. And he made a friend for the day at the playground.

Two years ago, we were on a summer trip in Montreal and hanging out in a playground. Like many parents, I was laying low on the sidelines, observing my child's interaction with other kids. Ben walks up to a kid on the swing set and says, "I am not sure what language to say ‘Hi’ in, but my name is Ben." Ben's natural ability to adjust his approach blew my mind because he not only found a way to communicate with a complete stranger who might not be comfortable speaking his language, but he also shifted his method based on what he knew - and this is equally important - what he didn’t know. Kids are the future, and they teach us valuable lessons all the time.

Ben taught me...

The Drupal community and its software need to say "Hi" and make people feel comfortable

Improving the evaluator experience has become one of our community's initiatives. There are many approaches and plenty of awesome people contributing...Read More

Categories: FLOSS Project Planets

Benjamin Mako Hill: Is English Wikipedia’s ‘rise and decline’ typical?

Planet Debian - Mon, 2018-04-23 17:20

This graph shows the number of people contributing to Wikipedia over time:

The number of active Wikipedia contributors exploded, suddenly stalled, and then began gradually declining. (Figure taken from Halfaker et al. 2013)

The figure comes from “The Rise and Decline of an Open Collaboration System,” a well-known 2013 paper that argued that Wikipedia’s transition from rapid growth to slow decline in 2007 was driven by an increase in quality control systems. Although many people have treated the paper’s finding as representative of broader patterns in online communities, Wikipedia is a very unusual community in many respects. Do other online communities follow Wikipedia’s pattern of rise and decline? Does increased use of quality control systems coincide with community decline elsewhere?

In a paper that my student Nathan TeBlunthuis is presenting Thursday morning at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI),  a group of us have replicated and extended the 2013 paper’s analysis in 769 other large wikis. We find that the dynamics observed in Wikipedia are a strikingly good description of the average Wikia wiki. They appear to reoccur again and again in many communities.

The original “Rise and Decline” paper (we’ll abbreviate it “RAD”) was written by Aaron Halfaker, R. Stuart Geiger, Jonathan T. Morgan, and John Riedl. They analyzed data from English Wikipedia and found that Wikipedia’s transition from rise to decline was accompanied by increasing rates of newcomer rejection as well as the growth of bots and algorithmic quality control tools. They also showed that newcomers whose contributions were rejected were less likely to continue editing and that community policies and norms became more difficult to change over time, especially for newer editors.

Our paper, just published in the CHI 2018 proceedings, replicates most of RAD’s analysis on a dataset of 769 of the largest wikis from Wikia that were active between 2002 and 2010. We find that RAD’s findings generalize to this large and diverse sample of communities.

We can walk you through some of the key findings. First, the growth trajectory of the average wiki in our sample is similar to that of English Wikipedia. As shown in the figure below, an initial period of growth stabilizes and leads to decline several years later.

The average Wikia wiki also experiences a period of growth followed by stabilization and decline (from TeBlunthuis, Shaw, and Hill 2018).

We also found that newcomers on Wikia wikis were reverted more and continued editing less. As on Wikipedia, the two processes were related. Similar to RAD, we also found that newer editors were more likely to have their contributions to the “project namespace” (where policy pages are located) undone as wikis got older. Indeed, the specific estimates from our statistical models are very similar to RAD’s for most of these findings!

There were some parts of the RAD analysis that we couldn’t reproduce in our context. For example, there are not enough bots or algorithmic editing tools in Wikia to support statistical claims about their effects on newcomers.

At the same time, we were able to do some things that the RAD authors could not.  Most importantly, our findings discount some Wikipedia-specific explanations for a rise and decline. For example, English Wikipedia’s decline coincided with the rise of Facebook, smartphones, and other social media platforms. In theory, any of these factors could have caused the decline. Because the wikis in our sample experienced rises and declines at similar points in their life-cycle but at different points in time, the rise and decline findings we report seem unlikely to be caused by underlying temporal trends.

The big communities we study seem to have consistent “life cycles” where stabilization and/or decay follows an initial period of growth. The fact that the same kinds of patterns happen on English Wikipedia and other online groups implies a more general set of social dynamics at work that we do not think existing research (including ours) explains in a satisfying way. What drives the rise and decline of communities more generally? Our findings make it clear that this is a big, important question that deserves more attention.

We hope you’ll read the paper and get in touch by commenting on this post or emailing Nate if you’d like to learn or talk more. The paper is available online and has been published under an open access license. If you really want to get into the weeds of the analysis, we will soon publish all the data and code necessary to reproduce our work in a repository on the Harvard Dataverse.

Nate TeBlunthuis will be presenting the project this week at CHI in Montréal on Thursday April 26 at 9am in room 517D.  For those of you not familiar with CHI, it is the top venue for Human-Computer Interaction. All CHI submissions go through double-blind peer review and the papers that make it into the proceedings are considered published (same as journal articles in most other scientific fields). Please feel free to cite our paper and send it around to your friends!

This blog post, and the open access paper that it describes, is a collaborative project with Aaron Shaw, that was led by Nate TeBlunthuis. A version of this blog post was originally posted on the Community Data Science Collective blog. Financial support came from the US National Science Foundation (grants IIS-1617129,  IIS-1617468, and GRFP-2016220885 ), Northwestern University, the Center for Advanced Study in the Behavioral Sciences at Stanford University, and the University of Washington. This project was completed using the Hyak high performance computing cluster at the University of Washington.

Categories: FLOSS Project Planets

Jeff Geerling's Blog: Post-Mollom, what are the best options for preventing spam for Drupal?

Planet Drupal - Mon, 2018-04-23 15:14

Earlier this month, Mollom was officially discontinued. If you still have the Mollom module installed on some of your Drupal sites, form submissions that were previously protected by Mollom will behave as if Mollom was offline completely, meaning any spam Mollom would've prevented will be passed through.

For many Drupal sites, especially smaller sites that deal mostly with bot spam, there are a number of great modules that will help prevent 90% or more of all spam submissions, for example:

Categories: FLOSS Project Planets

Tryton News: New Tryton release 4.8

Planet Python - Mon, 2018-04-23 14:00

We are proud to announce the 4.8 release of Tryton. This is the last release that will support Python2; as decided at the last Tryton Unconference, future versions will be Python3-only.

This release introduces a new way of doing dynamic reporting. For now it is only available in the sale module, but the plan is to extend it to more modules in future releases. The effort to make all the desktop client features available in the web client has continued in this release, which resulted in fixing many small details and adding some missing features to the web client. Of course this release also includes many bug fixes and performance improvements. We added Persian as an official language for Tryton.

As usual the migration from previous series is fully supported. Some manual operation may be required, see migration from 4.6 to 4.8.

Major changes for the user
  • The clients show a toggle button next to the search input for all models that can be deactivated. This allows the user to search for deactivated records and to know that the model is deactivable.
  • Until now, when changes to a record in a pop-up window were cancelled, the client reset it using the stored value from the server, or deleted it if it was not yet stored. Now the clients restore the record to the state it had before the pop-up was opened, which is a more expected behaviour for the user.
  • It's no longer possible to expand a node that has too many records. This is needed to prevent consuming all the resources of the client. In such a case the client will switch to the form view, where normally the children will be displayed in a list view which supports loading records on the fly when scrolling.
  • To help companies comply with the right to erasure from the GDPR, a new wizard to erase a party has been developed. It erases personal information linked to the party like the name, addresses, contact mechanisms etc. It also removes this data from the history tables. Each module adds checks to prevent erasure if pending documents for the party still exist.
  • A name has been added to the contact mechanism. It can be used for example to indicate the name of the recipient of an email address or to distinguish between the phone number of reception, accounting and warehouse.
  • The default search on party will now also use contact mechanism values.
  • Similar to the design of the addresses which can be flagged for invoice or delivery usage, the contact mechanism received the same feature. So the code may now request a contact mechanism of a specific type. For example, it is now possible to define which email address of a party should be used to send the invoices.
  • All the matching criteria against product categories have been unified between all the modules. Any product category will match against itself or any parent category. This is the chosen behavior because it is the least astonishing.
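The unified category matching described above ("a category matches itself or any parent category") can be sketched as a simple walk up the parent chain. The Category class below is a stand-in for illustration, not Tryton's actual model:

```python
class Category:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def matches(rule_category, product_category):
    # A rule category matches the product's own category
    # or any of its ancestors.
    node = product_category
    while node is not None:
        if node is rule_category:
            return True
        node = node.parent
    return False

root = Category('Goods')
food = Category('Food', parent=root)
fruit = Category('Fruit', parent=food)
# matches(food, fruit) is True; matches(fruit, food) is False
```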
Desktop
  • The desktop client already had mnemonics for all buttons, and now they are also added to all field labels. This allows jumping quickly to any visible field by pressing ALT + <the underlined key>.
  • The desktop client has a new option to check if a new bug fix version has been published. It takes care of the notification on Windows and MacOS.
  • The Many2One fields in editable tree now show the icons to open the related record or clear the selection. This unifies the behaviour with the form view.
Web
  • Numeric values are now formatted with the user locale and use an input of type 'number' for editing. This allows to have the right virtual keyboard on mobile devices.
  • The web client finally receives the label that positions the selected record in the list and shows the number of records in the list.
  • The spell checking is now activated by the browser, so fields with the spell attribute defined will have spell checking activated.
  • The buttons of widgets are now skipped from tab navigation in the web client. The actions of those buttons are available via keyboard shortcuts.
  • The management of editable list/tree has been completely reworked. Now the full row becomes editable on the first click. The focus is kept on the line if it is not valid. The editing is stopped when the user clicks anywhere outside the view.
  • The sum list feature has been implemented in sao.
  • The same shortcuts of the Date and DateTime widgets available on tryton can now be used on web client.
  • Many2One fields are displayed on tree view as clickable link which opens the related record form view to allow quick edition.

We have pushed many small improvements which fix small jumps of page elements.

Accounting
  • The general ledger accounts are now opened from the Income Statement rows. This allows to see the details of the computed amounts.
  • It happens that users need to customize the configuration of a chart of account that comes from a template. Until now, this would prevent any further update without losing the customization. Now, records that are synchronized with a template are read-only by default, and a check box allows editing the value and thus removing the record from the update process.
  • Users may create a second chart of account by mistake. There are very rare cases when such creation is valid. As it is a complex task to correct such mistake, we added a warning when creating a second chart of account.
  • Now an error is raised when closing a period if there are still asset lines running for it.
  • Until now, only one tax code was allowed per tax. This was too restrictive: for some countries, null children taxes had to be created to allow more complex reporting. Now tax codes are no longer defined on the tax; instead they contain a list of tax lines. Those lines can define the base or the tax amount. On the report, the lines of each tax code are summed per period. All charts of accounts have been updated to follow this design.
Tax report on cash basis

The new account_tax_cash module allows to report taxes based on cash. The groups of taxes to report on cash basis are defined on the Fiscal Year or Period. But they can also be defined on the invoices per supplier.

The implementation of this new module also improved existing modules. The tax lines of closed period are verified against modification. The registration of payment on the invoice is limited to the amount of the invoice.

Spanish chart of account

The module, which was published for the first time in the 4.6 series, needed a deep cleaning. The latest changes in the accounting modules raised concerns about choices made for this chart. So it was decided to temporarily exclude the module from the release process and to not guarantee a migration path. The work to fix the module has started and we expect to be able to release a fixed version soon.

Invoicing
  • The description on invoice line is now optional. If a product is set the invoice report will show the product name instead of the line description.
  • The reconciliation date is now shown on the invoice instead of the Boolean reconciled. This provides more information for a single field.
  • The Move lines now show which invoice they pay.
  • An error is raised when trying to overpay an invoice.
Payment

A cron task has been added that will post automatically the clearing moves after a delay. The delay is configured on the payment journal.

Stripe Payments
  • If the customer disputes the payment, a dispute status is updated on the Tryton payment. When the dispute is closed, the payment is updated according to whether the company wins or loses.
  • Some missing charge events have been added, in particular the refund event, which may update the payment amount when the refund is partial.
Statement account_statement_ofx

This new module adds the automatic import of OFX files as statements. The OFX format is a common format which is supported in various countries, like the United States.

Stock
  • The stock quantity was only computed per product because that is the column stored on the stock move. But for some cases it is useful to compute the stock quantity per template: products from the same template share the same default unit of measure, so their quantities can be summed. This release adds to the products_by_location method the possibility to group by product columns like the template, as well as a relate action from the template which shows quantities per location.
  • When there is a very large number of locations, the Locations Quantity tree becomes difficult to use, especially when searching for the locations where a product is stored. So we added the option to open this view as a list; this way it is possible to search locations by quantity of the product.
  • We found that users have two different expectations about the default behaviour of the inventory when a quantity is not explicitly set: some expect the product quantity to be considered 0, and others expect the product quantity to be left unchanged. So we added an option on the inventory to choose the behaviour when an inventory line has no quantity.
  • Until now, the assignation process for supplier return shipments did not use child locations, which did not work if the location was a view. Now we assign using the children if the location is a view.
  • The supplier shipment now supports receiving the goods directly into the storage location. This way the inventory step is skipped.
Project
  • Until now, only sub-projects having the same party were invoiced. Now an invoice is created for each different party.
Sale
  • The description on the sale line is now optional. This avoids copying the product name into the sale description, as the product name is now shown on the sale report.
  • If the invoice method is based on shipment and a different product is shipped to the customer, the shipped product will be used for the invoice. Previously the initially sold product was always used.
  • Now it is possible to edit the header fields thanks to a new wizard which takes care of recomputing the lines according to the changes.
  • Reports on aggregated data have been added to the sale module. The report engine allows browsing the revenue and number of sales per:

    • Customer
    • Product
    • Category
    • Country > Subdivision

    Those data are over Period, Company and Warehouse. The reports also show a sparkline for the revenue trend which can be drilled down.

  • The sale with the shipment method On Invoice Paid will create the purchase requests and/or drop shipments when the lines are fully paid. Before they were created directly on validation.
  • The shipment cost is no longer computed when returning goods.
sale_promotion_coupon

This new module allows to create coupons that are used to apply a promotion on the sale. The coupon can be configured to be usable only a specific number of times globally or per party.

Purchase
  • The product supplier can now be used on the purchase line. This allows displaying the supplier's definition of this product.
  • Now it is possible to edit the header fields thanks to a new wizard which takes care of recomputing the lines according to the changes.
  • The description on the purchase line is now optional. This avoids copying the product name into the purchase description, as the product name is now shown on the purchase report. The same change has been applied to purchase requests and requisitions.
  • If the invoice method is based on shipment and a different product is received from the supplier, the received product will be used on the invoice. Previously the purchased product was always used.
  • The user is warned if he tries to confirm a purchase order for a different warehouse than the warehouse of the linked purchase request.
Purchase request quotation
  • This new module allows to manage requests for quotation to different suppliers. Each request will collect quotation information from the supplier. The preferred quotation will be used to create the purchase.
Notification
  • Now it is possible to filter which type of email to use for sending the notification.
  • The email notification skips a recipient if the target field is empty. For example, if a notification is defined on the invoice with the party as recipient and the party has no email address, then the invoice will not be sent. By adding a fallback recipient, the email is instead sent to a specific user's address, for example a secretary in charge of sending it, or a mailbox for a printer which will print it automatically.
Tooltips

The following modules have received tooltips:

  • account_credit_limit
  • account_dunning
  • carrier
  • carrier_weight
  • currency
  • product_attribute
Major changes for the developer
  • Starting from this release, the Tryton client only supports version 3 of GTK+. This will allow migrating it to Python3.
  • The group widget can be defined as expandable by adding the attribute expandable. If the value is 1, it starts expanded and if the value is 0, it starts unexpanded. Both clients support it.
  • To ensure that all buttons may have their access rights configured, a new test has been added. We also added the string, help and confirm attributes to ir.model.button, so they can be shared between different views.
  • The monetary format is now defined on the language instead of the currency. According to user experience best practices, the amount should be displayed in the user's language format even if it is a foreign currency.
  • It's now possible to manually define an exceptional parent for a language. This allows using a custom monetary format for each Latin American country's variant of the language.
  • Dict fields are now stored using their canonical representation. This allows equality comparisons between them.
  • The language formatting has been simplified to expose the instance methods: format, currency and strftime. A classmethod get is added to return the language instance of the code or the transaction language.
  • The previous API for session management was based on the ORM methods, which made it more complicated to implement alternative session managers. We created a simplified API agnostic of the ORM: new, remove, check and reset.
  • If the database has the required features (for PostgreSQL: the unaccent function), the ilike search is performed on unaccented strings by default for all Char fields. This can be deactivated by setting the attribute Char.search_unaccented to False.
  • We have added the support for EXCLUDE constraints. An EXCLUDE constraint is a kind of extension to the UNIQUE constraint which can be applied on a subset of the rows and on expression instead of only columns. For more information, please read the EXCLUDE documentation of PostgreSQL.
  • It is now possible for a module to register classes to the pool only if a specified set of modules is activated. This replaces the previous silent skip. Existing modules that relied on the old behaviour must be updated to use the depends keyword, otherwise they will crash at start-up.
  • Sometimes a module depends optionally on another, but it may need to fill, from an XML record, a value for a field that is defined in the optional module. We added a depends keyword on the field which ignores it if the listed modules are not activated.
  • The clients now support the definition of a specific order and context when searching from a field like Many2One, Reference, One2Many etc. This is useful to have preferred record on top of the list of found records.
  • A new mixin has been added to add logical suppression to a Model, and it also ensures that the client is aware that the model is deactivable. All the modules have been updated to use this new mixin.
  • The context model name is now available on the screen context. This allows for example to change the behaviour of a wizard depending on the context model.
  • Tryton prevents by default to modify records that are part of the setup (which are created by XML file in the modules). This requires to make a query on the ModelData table on each modification or deletion. But usually only a few models have such records, so we now put in memory the list of models that should be checked. This allows to skip the query for most of the models and thus save some queries.
  • Buttons can be defined with PYSON expressions for the invisible or readonly attributes. Sometimes the developer wants to be sure that the fields used in the PYSON expressions are read by the client. A depends attribute has been added which ensures that the listed fields are included in every view where the button is displayed.
  • The administrator can now reset the password of a user with a simple click. The server generates a random reset password, valid for 1 day by default, and sends it by email to the user. This reset password is only valid until the user has set a new password. It is also possible to reset the admin password this way using the trytond-admin command line tool.
  • The context has a new optional key _request which contains some information like the remote address, the HTTP host etc. Those values are correctly set up if the server runs behind a proxy which sets the X-Forwarded headers.
  • A malicious hacker could flood the LoginAttempt table by sending failing requests for different logins, even though the size of the record is limited and the records are purged frequently. The server now also limits the number of attempts per IP network. The size of the network can be configured for each version (IPv4 and IPv6). This is only the last level of protection; it is still recommended to use a proxy and to set up an IDS.
  • The name attribute of the tag image can now be a field. In this case, it will display the icon from the value of the field.
  • Now it is possible to extend with the same Mixin all existing pool objects that are subclasses of a target. A usage example is to extend the Report.convert method of all existing reports to add support for another engine.
  • We have decided to remove the MySQL back-end from the core of Tryton. The back-end was not tested on our continuous integration server and so had many failures, like not supporting Python3. The current code has been moved to its own repository. This module will not be part of the release process until some volunteer makes it green on the test server.
  • The current form buttons are automatically added to the toolbar under the action menu for fast access. Now the developer can define under which toolbar menu the button will appear.
  • Tryton now uses LibreOffice instead of unoconv for converting between report formats. There were some issues with unoconv which were fixed by using LibreOffice directly. We now also publish Docker images with the suffix -office which contain all the requirements for report conversion.
  • A new Currency.currency_rate_sql method has been added which returns a SQL query producing, for each currency, the rate, start_date and end_date. This is useful to get a currency rate in a larger SQL query. This method uses window functions, if available on the database back-end, to produce an optimized query.
  • Since the introduction of context management in proteus, the client library, the context was taken from different places in an inconsistent way. We changed the library to always use the context and configuration at the time the instance was created. Some testing scenarios may need adjustment as they could rely on the previous behaviour.
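The canonical representation used for Dict fields, mentioned above, can be illustrated with sorted-key JSON serialization. This is a sketch of the idea, not necessarily Tryton's exact serialization format:

```python
import json

def canonical(value):
    # Sorting the keys and removing extra whitespace gives every
    # equal dict the same serialized form, so stored values can be
    # compared for equality as plain strings.
    return json.dumps(value, sort_keys=True, separators=(',', ':'))

# Two dicts with the same content, built in different orders,
# serialize to the same canonical string.
assert canonical({'b': 1, 'a': 2}) == canonical({'a': 2, 'b': 1})
```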
Accounting
  • The previous API to reconcile lines allowed creating only one reconciliation at a time. But as this can trigger, for example, the processing of an invoice, it can be a bottleneck when reconciling a lot of different lines, as a statement can do. So the API has been improved, in a mostly backward compatible way, to allow creating many reconciliations at once.
  • The invoice now has a method to add and remove payments which should always be used.
Web
  • The limit of authentication requests per network is also applied to web users.
  • Thanks to the implementation of the exclude constraint, the uniqueness of web users' email addresses only applies to active users.
Categories: FLOSS Project Planets

Sandipan Dey: Implementing a Soft-Margin Kernelized Support Vector Machine Binary Classifier with Quadratic Programming in R and Python

Planet Python - Mon, 2018-04-23 12:51
In this article, a couple of implementations of the support vector machine binary classifier with quadratic programming libraries (in R and Python, respectively) and their application on a few datasets are discussed. The following video lectures / tutorials / links have been very useful for the implementation: this one from MIT AI course this … Continue reading Implementing a Soft-Margin Kernelized Support Vector Machine Binary Classifier with Quadratic Programming in R and Python
Categories: FLOSS Project Planets

Security public service announcements: Drupal 7 and 8 core critical release on April 25th, 2018 PSA-2018-003

Planet Drupal - Mon, 2018-04-23 12:27

There will be a security release of Drupal 7.x, 8.4.x, and 8.5.x on April 25th, 2018 between 16:00 - 18:00 UTC. This PSA is to notify that the Drupal core release is outside of the regular schedule of security releases. The Drupal Security Team urges you to reserve time for core updates during that window, because there is some risk that exploits might be developed within hours or days. Security release announcements will appear on the Drupal.org security advisory page.

This security release is a follow-up to the one released as SA-CORE-2018-002 on March 28.

  • Sites on 7.x or 8.5.x can immediately update when the advisory is released using the normal procedure.
  • Sites on 8.4.x should immediately update to the 8.4.8 release that will be provided in the advisory, and then plan to update to 8.5.3 or the latest security release as soon as possible (since 8.4.x no longer receives official security coverage).

The security advisory will list the appropriate version numbers for each branch. Your site's update report page will recommend the 8.5.x release even if you are on 8.4.x or an older release, but temporarily updating to the provided backport for your site's current version will ensure you can update quickly without the possible side effects of a minor version update.

Patches for Drupal 7.x, 8.4.x, 8.5.x and 8.6.x will be provided in addition to the releases mentioned above. (If your site is on a Drupal 8 release older than 8.4.x, it no longer receives security coverage and will not receive a security update. The provided patches may work for your site, but upgrading is strongly recommended as older Drupal versions contain other disclosed security vulnerabilities.)

This release will not require a database update.

The CVE for this issue is CVE-2018-7602. The Drupal-specific identifier for the issue will be SA-CORE-2018-004.

Neither the Security Team nor any other party is able to release any more information about this vulnerability until the announcement is made. The announcement will be made public at https://www.drupal.org/security, over Twitter, and in email for those who have subscribed to our email list. To subscribe to the email list: log in on Drupal.org, go to your user profile page, and subscribe to the security newsletter on the Edit » My newsletters tab.

Journalists interested in covering the story are encouraged to email security-press@drupal.org to be sure they will get a copy of the journalist-focused release. The Security Team will release a journalist-focused summary email at the same time as the new code release and advisory.
If you find a security issue, please report it at https://www.drupal.org/security-team/report-issue.

Categories: FLOSS Project Planets

Real Python: Python 3's pathlib Module: Taming the File System

Planet Python - Mon, 2018-04-23 10:00

Have you struggled with file path handling in Python? In Python 3.4 and above, the struggle is now over! You no longer need to scratch your head over code like:

>>> path.rsplit('\\', maxsplit=1)[0]

Or cringe at the verbosity of:

>>> os.path.isfile(os.path.join(os.path.expanduser('~'), 'realpython.txt'))

In this tutorial, you will see how to work with file paths—names of directories and files—in Python. You will learn new ways to read and write files, manipulate paths and the underlying file system, as well as see some examples of how to list files and iterate over them. Using the pathlib module, the two examples above can be rewritten using elegant, readable, and Pythonic code like:

>>> path.parent
>>> (pathlib.Path.home() / 'realpython.txt').is_file()

Free PDF Download: Python 3 Cheat Sheet

The Problem With Python File Path Handling

Working with files and interacting with the file system are important for many different reasons. The simplest cases may involve only reading or writing files, but sometimes more complex tasks are at hand. Maybe you need to list all files in a directory of a given type, find the parent directory of a given file, or create a unique file name that does not already exist.

Traditionally, Python has represented file paths using regular text strings. With support from the os.path standard library, this has been adequate although a bit cumbersome (as the second example in the introduction shows). However, since paths are not strings, important functionality is spread all around the standard library, including libraries like os, glob, and shutil. The following example needs three import statements just to move all text files to an archive directory:

import glob
import os
import shutil

for file_name in glob.glob('*.txt'):
    new_path = os.path.join('archive', file_name)
    shutil.move(file_name, new_path)

With paths represented by strings, it is possible, but usually a bad idea, to use regular string methods. For instance, instead of joining two paths with + like regular strings, you should use os.path.join(), which joins paths using the correct path separator on the operating system. Recall that Windows uses \ while Mac and Linux use / as a separator. This difference can lead to hard-to-spot errors, such as our first example in the introduction working only for Windows paths.
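To make the pitfall concrete, here is a minimal sketch contrasting the two approaches (the file names are just examples):

```python
import os.path

# Concatenating with + bakes one separator into the string, so the
# result is only correct on one platform:
bad = 'archive' + '\\' + 'file.txt'   # Windows-only

# os.path.join uses the separator of the system the code runs on:
good = os.path.join('archive', 'file.txt')
```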

The pathlib module was introduced in Python 3.4 (PEP 428) to deal with these challenges. It gathers the necessary functionality in one place and makes it available through methods and properties on an easy-to-use Path object.

Early on, other packages still used strings for file paths, but as of Python 3.6, the pathlib module is supported throughout the standard library, partly due to the addition of a file system path protocol. If you are stuck on legacy Python, there is also a backport available for Python 2.
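The protocol in question is the one added by PEP 519: any object with a __fspath__() method, such as a Path, is accepted wherever the standard library expects a path. A quick sketch:

```python
import os
import pathlib

# os.fspath() invokes the file system path protocol (PEP 519):
path = pathlib.Path('realpython.txt')
print(os.fspath(path))   # the plain string 'realpython.txt'

# Plain strings pass through unchanged:
print(os.fspath('realpython.txt'))
```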

Time for action: let us see how pathlib works in practice.

Creating Paths

All you really need to know about is the pathlib.Path class. There are a few different ways of creating a path. First of all, there are classmethods like .cwd() (Current Working Directory) and .home() (your user’s home directory):

>>> import pathlib
>>> pathlib.Path.cwd()
PosixPath('/home/gahjelle/realpython/')

Note: Throughout this tutorial, we will assume that pathlib has been imported, without spelling out import pathlib as above. As you will mainly be using the Path class, you can also do from pathlib import Path and write Path instead of pathlib.Path.

A path can also be explicitly created from its string representation:

>>> pathlib.Path(r'C:\Users\gahjelle\realpython\file.txt')
WindowsPath('C:/Users/gahjelle/realpython/file.txt')

A little tip for dealing with Windows paths: on Windows, the path separator is a backslash, \. However, in many contexts, backslash is also used as an escape character in order to represent non-printable characters. To avoid problems, use raw string literals to represent Windows paths. These are string literals that have an r prepended to them. In raw string literals the \ represents a literal backslash: r'C:\Users'.

A third way to construct a path is to join the parts of the path using the special operator /. The forward slash operator is used independently of the actual path separator on the platform:

>>> pathlib.Path.home() / 'python' / 'scripts' / 'test.py'
PosixPath('/home/gahjelle/python/scripts/test.py')

The / can join several paths or a mix of paths and strings (as above) as long as there is at least one Path object. If you do not like the special / notation, you can do the same thing with the .joinpath() method:

>>> pathlib.Path.home().joinpath('python', 'scripts', 'test.py')
PosixPath('/home/gahjelle/python/scripts/test.py')

Note that in the preceding examples, the pathlib.Path is represented by either a WindowsPath or a PosixPath. The actual object representing the path depends on the underlying operating system. (That is, the WindowsPath example was run on Windows, while the PosixPath examples have been run on Mac or Linux.) See the section Operating System Differences for more information.

Reading and Writing Files

Traditionally, the way to read or write a file in Python has been to use the built-in open() function. This is still true as the open() function can use Path objects directly. The following example finds all headers in a Markdown file and prints them:

path = pathlib.Path.cwd() / 'test.md'
with open(path, mode='r') as fid:
    headers = [line.strip() for line in fid if line.startswith('#')]
print('\n'.join(headers))

An equivalent alternative is to call .open() on the Path object:

with path.open(mode='r') as fid:
    ...

In fact, Path.open() is calling the built-in open() behind the scenes. Which option you use is mainly a matter of taste.

For simple reading and writing of files, there are a couple of convenience methods in the pathlib library:

  • .read_text(): open the path in text mode and return the contents as a string.
  • .read_bytes(): open the path in binary/bytes mode and return the contents as a bytestring.
  • .write_text(): open the path and write string data to it.
  • .write_bytes(): open the path in binary/bytes mode and write data to it.

Each of these methods handles the opening and closing of the file, making them trivial to use, for instance:

>>> path = pathlib.Path.cwd() / 'test.md'
>>> path.read_text()
<the contents of the test.md-file>

Paths can also be specified as simple file names, in which case they are interpreted relative to the current working directory. The following example is equivalent to the previous one:

>>> pathlib.Path('test.md').read_text()
<the contents of the test.md-file>

The .resolve() method will find the full path. Below, we confirm that the current working directory is used for simple file names:

>>> path = pathlib.Path('test.md')
>>> path.resolve()
PosixPath('/home/gahjelle/realpython/test.md')
>>> path.resolve().parent == pathlib.Path.cwd()
True

Note that when paths are compared, it is their representations that are compared. In the example above, path.parent is not equal to pathlib.Path.cwd(), because path.parent is represented by '.' while pathlib.Path.cwd() is represented by '/home/gahjelle/realpython/'.

Picking Out Components of a Path

The different parts of a path are conveniently available as properties. Basic examples include:

  • .name: the file name without any directory
  • .parent: the directory containing the file, or the parent directory if path is a directory
  • .stem: the file name without the suffix
  • .suffix: the file extension
  • .anchor: the part of the path before the directories

Here are these properties in action:

>>> path
PosixPath('/home/gahjelle/realpython/test.md')
>>> path.name
'test.md'
>>> path.stem
'test'
>>> path.suffix
'.md'
>>> path.parent
PosixPath('/home/gahjelle/realpython')
>>> path.parent.parent
PosixPath('/home/gahjelle')
>>> path.anchor
'/'

Note that .parent returns a new Path object, whereas the other properties return strings. This means for instance that .parent can be chained as in the last example or even combined with / to create completely new paths:

>>> path.parent.parent / ('new' + path.suffix)
PosixPath('/home/gahjelle/new.md')

The excellent Pathlib Cheatsheet provides a visual representation of these and other properties and methods.

Moving and Deleting Files

Through pathlib, you also have access to basic file system level operations like moving, updating, and even deleting files. For the most part, these methods do not give a warning or wait for confirmation before information or files are lost. Be careful when using these methods.

To move a file, use either .rename() or .replace(). The difference between the two methods is that the latter will overwrite the destination path if it already exists, while the behavior of .rename() is more subtle. An existing file will be overwritten if you have permission to overwrite it.
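If you want a move that refuses to clobber an existing file, one sketch is to check the destination yourself before calling .replace(). The helper name below is ours, and note the small race window between the check and the move:

```python
import pathlib

def move_without_overwrite(source: pathlib.Path, destination: pathlib.Path) -> bool:
    """Move source to destination unless destination already exists.

    Returns True if the file was moved. There is a race window between
    .exists() and .replace(), which is acceptable for simple scripts.
    """
    if destination.exists():
        return False
    source.replace(destination)
    return True
```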

When you are renaming files, useful methods might be .with_name() and .with_suffix(). They both return the original path but with the name or the suffix replaced, respectively.

For instance:

>>> path
PosixPath('/home/gahjelle/realpython/test001.txt')
>>> path.with_suffix('.py')
PosixPath('/home/gahjelle/realpython/test001.py')

Directories and files can be deleted using .rmdir() and .unlink() respectively. (Again, be careful!)

Examples

In this section, you will see some examples of how to use pathlib to deal with simple challenges.

Counting Files

There are a few different ways to list many files. The simplest is the .iterdir() method, which iterates over all files in the given directory. The following example combines .iterdir() with the collections.Counter class to count how many files there are of each filetype in the current directory:

>>> import collections
>>> collections.Counter(p.suffix for p in pathlib.Path.cwd().iterdir())
Counter({'.md': 2, '.txt': 4, '.pdf': 2, '.py': 1})

More flexible file listings can be created with the methods .glob() and .rglob() (recursive glob). For instance, pathlib.Path.cwd().glob('*.txt') returns all files with a .txt suffix in the current directory. The following only counts filetypes starting with p:

>>> import collections
>>> collections.Counter(p.suffix for p in pathlib.Path.cwd().glob('*.p*'))
Counter({'.pdf': 2, '.py': 1})

Display a Directory Tree

The next example defines a function, tree(), that will print a visual tree representing the file hierarchy, rooted at a given directory. Here, we want to list subdirectories as well, so we use the .rglob() method:

def tree(directory):
    print(f'+ {directory}')
    for path in sorted(directory.rglob('*')):
        depth = len(path.relative_to(directory).parts)
        spacer = '    ' * depth
        print(f'{spacer}+ {path.name}')

Note that we need to know how far away from the root directory a file is located. To do this, we first use .relative_to() to represent a path relative to the root directory. Then, we count the number of directories (using the .parts property) in the representation. When run, this function creates a visual tree like the following:

>>> tree(pathlib.Path.cwd())
+ /home/gahjelle/realpython
    + directory_1
        + file_a.md
    + directory_2
        + file_a.md
        + file_b.pdf
        + file_c.py
    + file_1.txt
    + file_2.txt

Note: The f-strings only work in Python 3.6 and later. In older Pythons, the expression f'{spacer}+ {path.name}' can be written '{0}+ {1}'.format(spacer, path.name).

Find the Last Modified File

The .iterdir(), .glob(), and .rglob() methods are great fits for generator expressions and list comprehensions. To find the file in a directory that was last modified, you can use the .stat() method to get information about the underlying files. For instance, .stat().st_mtime gives the time of last modification of a file:

>>> from datetime import datetime
>>> time, file_path = max((f.stat().st_mtime, f) for f in directory.iterdir())
>>> print(datetime.fromtimestamp(time), file_path)
2018-03-23 19:23:56.977817 /home/gahjelle/realpython/test001.txt

You can even get the contents of the file that was last modified with a similar expression:

>>> max((f.stat().st_mtime, f) for f in directory.iterdir())[1].read_text()
<the contents of the last modified file in directory>

The timestamp returned from the different .stat().st_ properties represents seconds since January 1st, 1970. In addition to datetime.fromtimestamp, time.localtime or time.ctime may be used to convert the timestamp to something more usable.
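For instance, time.ctime renders such a timestamp as a human-readable string (the exact output depends on the local timezone):

```python
import time

# A timestamp as returned by .stat().st_mtime: seconds since the epoch.
timestamp = 1521829436
print(time.ctime(timestamp))  # e.g. 'Fri Mar 23 19:23:56 2018'
```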

Create a Unique File Name

The last example will show how to construct a unique numbered file name based on a template. First, specify a pattern for the file name, with room for a counter. Then, check the existence of the file path created by joining a directory and the file name (with a value for the counter). If it already exists, increase the counter and try again:

def unique_path(directory, name_pattern):
    counter = 0
    while True:
        counter += 1
        path = directory / name_pattern.format(counter)
        if not path.exists():
            return path

path = unique_path(pathlib.Path.cwd(), 'test{:03d}.txt')

If the directory already contains the files test001.txt and test002.txt, the above code will set path to test003.txt.

Operating System Differences

Earlier, we noted that when we instantiated pathlib.Path, either a WindowsPath or a PosixPath object was returned. The kind of object will depend on the operating system you are using. This feature makes it fairly easy to write cross-platform compatible code. It is possible to ask for a WindowsPath or a PosixPath explicitly, but you would only be limiting your code to that system without any benefit. A concrete path like this cannot be used on a different system:

>>> pathlib.WindowsPath('test.md')
NotImplementedError: cannot instantiate 'WindowsPath' on your system

There might be times when you need a representation of a path without access to the underlying file system (in which case it could also make sense to represent a Windows path on a non-Windows system or vice versa). This can be done with PurePath objects. These objects support the operations discussed in the section on Path Components but not the methods that access the file system:

>>> path = pathlib.PureWindowsPath(r'C:\Users\gahjelle\realpython\file.txt')
>>> path.name
'file.txt'
>>> path.parent
PureWindowsPath('C:/Users/gahjelle/realpython')
>>> path.exists()
AttributeError: 'PureWindowsPath' object has no attribute 'exists'

You can directly instantiate PureWindowsPath or PurePosixPath on all systems. Instantiating PurePath will return one of these objects depending on the operating system you are using.

Paths as Proper Objects

In the introduction, we briefly noted that paths are not strings, and one motivation behind pathlib is to represent the file system with proper objects. In fact, the official documentation of pathlib is titled pathlib — Object-oriented filesystem paths. The Object-oriented approach is already quite visible in the examples above (especially if you contrast it with the old os.path way of doing things). However, let me leave you with a few other tidbits.

Independently of the operating system you are using, paths are represented in Posix style, with the forward slash as the path separator. On Windows, you will see something like this:

>>> pathlib.Path(r'C:\Users\gahjelle\realpython\file.txt')
WindowsPath('C:/Users/gahjelle/realpython/file.txt')

Still, when a path is converted to a string, it will use the native form, for instance with backslashes on Windows:

>>> str(pathlib.Path(r'C:\Users\gahjelle\realpython\file.txt'))
'C:\\Users\\gahjelle\\realpython\\file.txt'

This is particularly useful if you are using a library that does not know how to deal with pathlib.Path objects. This is a bigger problem on Python versions before 3.6. For instance, in Python 3.5, the configparser standard library can only use string paths to read files. The way to handle such cases is to do the conversion to a string explicitly:

>>> from configparser import ConfigParser
>>> path = pathlib.Path('config.txt')
>>> cfg = ConfigParser()
>>> cfg.read(path)  # Error on Python < 3.6
TypeError: 'PosixPath' object is not iterable
>>> cfg.read(str(path))  # Works on Python >= 3.4
['config.txt']

Possibly the most unusual part of the pathlib library is the use of the / operator. For a little peek under the hood, let us see how that is implemented. This is an example of operator overloading: the behavior of an operator is changed depending on the context. You have seen this before. Think about how + means different things for strings and numbers. Python implements operator overloading through the use of double underscore methods (a.k.a. dunder methods).

The / operator is defined by the .__truediv__() method. In fact, if you take a look at the source code of pathlib, you’ll see something like:

class PurePath(object):
    def __truediv__(self, key):
        return self._make_child((key,))

Conclusion

Since Python 3.4, pathlib has been available in the standard library. With pathlib, file paths can be represented by proper Path objects instead of plain strings as before. These objects make code dealing with file paths:

  • Easier to read, especially because / is used to join paths together
  • More powerful, with most necessary methods and properties available directly on the object
  • More consistent across operating systems, as peculiarities of the different systems are hidden by the Path object

In this tutorial, you have seen how to create Path objects, read and write files, manipulate paths and the underlying file system, as well as some examples of how to iterate over many file paths.


Categories: FLOSS Project Planets

Yasoob Khalid: Reverse Engineering Facebook: Public Video Downloader

Planet Python - Mon, 2018-04-23 09:48

In the last post we took a look at downloading songs from Soundcloud. In this post we will take a look at Facebook and how we can create a downloader for Facebook videos. It all started with me wanting to download a video from Facebook which I had the copyrights to. I wanted to automate the process so that I could download multiple videos with just one command. Now there are tools like youtube-dl which can do this job for you but I wanted to explore Facebook’s API myself. Without any further ado let me show you step by step how I approached this project. In this post we will cover downloading public videos. In the next post I will take a look at downloading private videos.

Step 1: Finding a Video

Find a video which you own and have copyrights to. Now there are two types of videos on Facebook. The main type is the public videos which can be accessed by anyone and then there are private videos which are accessible only by a certain subset of people on Facebook. Just to keep things easy, I initially decided to use a public video with plans on expanding the system for private videos afterwards.

Step 2: Recon

In this step we will open up the video in a new tab where we aren’t logged in just to see whether we can access these public videos without being logged in or not. I tried doing it for the video in question and this is what I got:

Apparently we can't access even a publicly shared video without logging in. However, I remembered that I had recently watched a video without being logged in, and that piqued my interest. I decided to explore the original video a bit more.

I right-clicked on the original video just to check its source and to figure out whether the video url was reconstructable from the original page url. Instead of finding the video source, I found a different url which can be used to share this video. Take a look at these pictures to get a better understanding of what I am talking about:

I tried opening this url in a new window without being logged in and boom! The video opened! Now I am not sure whether it worked by sheer luck or whether it really is a valid way to view a video without being logged in. I tried this on multiple videos and it worked every single time. Either way, we now have a way to access the video without logging in, and it's time to intercept the requests Facebook makes when we try to play the video.

Open up Chrome developer tools and click on the XHR button just like this:

XHR stands for XMLHttpRequest and is used by the websites to request additional data using Javascript once the webpage has been loaded. Mozilla docs has a good explanation of it:

Use XMLHttpRequest (XHR) objects to interact with servers. You can retrieve data from a URL without having to do a full page refresh. This enables a Web page to update just part of a page without disrupting what the user is doing. XMLHttpRequest is used heavily in Ajax programming.

Filtering requests using XHR allows us to cut down the number of requests we would have to look through. It might not work always so if you don’t see anything interesting after filtering out requests using XHR, take a look at the “all” tab.

The XHR tab was interesting: it did not contain any API requests. Instead, the very first requested link was the mp4 video itself.

This was surprising, because companies like Facebook usually like to have an intermediate server so that they don't have to hardcode the mp4 links in the webpage. However, if it is easier for me this way, then who am I to complain?

My very next step was to search for this url in the original source of the page and luckily I found it:

This confirmed my suspicions. Facebook hardcodes the video url in the original page if you view the page without signing in. We will later see how this is different when you are signed in. The url in the current case is found in a <script> tag.

 

Step 3: Automating it

Now let’s write a Python script to download public videos. The script is pretty simple. Here is the code:

import requests as r
import re
import sys

url = sys.argv[-1]
html = r.get(url)
video_url = re.search('hd_src:"(.+?)"', html.text).group(1)
print(video_url)

Save the above code in a video_download.py file and use it like this:

$ python video_download.py video_url

Don’t forget to replace video_url with actual video url of this form:

https://www.facebook.com/username/videos/10213942282701232/

The script gets the video url from the command line. It then opens up the video page using requests and then uses regular expressions to parse the video url from the page. This might not work if the video isn’t available in HD. I leave that up to you to figure out how to handle that case.
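One way to handle that case is to fall back to the SD stream. The sd_src key below is an assumption, mirroring the hd_src key seen in the page source at the time:

```python
import re

def extract_video_url(html_text):
    # Prefer the HD stream, then fall back to SD. 'sd_src' is assumed
    # to follow the same pattern as the 'hd_src' key seen in the page.
    for key in ('hd_src', 'sd_src'):
        match = re.search(key + ':"(.+?)"', html_text)
        if match:
            return match.group(1)
    return None
```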

That is all for today. I will cover the downloading of your private videos in the next post. That is a bit more involved and requires you logging into Facebook. Follow the blog and stay tuned! If you have any questions/comments/suggestions please use the comment form or email me.

Have a great day!

Categories: FLOSS Project Planets

Bryan Pendleton: Magpie Murders: a very short review

Planet Apache - Mon, 2018-04-23 09:27

Magpie Murders is by Anthony Horowitz, who is not well known to me as an author, although apparently his young adult "Alex Rider" series is tremendously popular.

However, as a screenwriter, he wrote the beyond-wonderful Foyle's War, which by itself would be the accomplishment of a lifetime.

(And before that he adapted Caroline Graham's Chief Inspector Barnaby series into Midsomer Murders! What a resume!)

Magpie Murders is a delightfully-executed showpiece of a murder mystery. Its hook is that it's a book-within-a-book, in which our heroine is the editor at a small independent press which publishes a series of cottage mysteries set in rural 1950's England. She has just received the latest in the series, Magpie Murders, only to discover that it is the last, for the detective Atticus Pund has been diagnosed with a terminal illness.

Only then it turns out that the author of these mysteries is himself rather a mystery; soon there is plot and intrigue both within and without the book, as our heroine tries to figure out what clues the book itself reveals about its author and his circumstances.

Without giving too much away, it turns out that our (fictional) author, who has become quite wealthy by making a career of writing murder mysteries, fancies himself an author of serious talents, and is disappointed that his attempts to write "literature" have been unsuccessful. Perhaps this is actually a book-within-a-book-within-a-book?

Along the way there are twists and turns, there is a delightful cast of characters both within the murder mystery and without, and there are entertaining sequences both in the England of the 1950's and in the England of present times.

And, this being an English murder mystery, there is wordplay, there are artifices, and, of course, there are castles, moats, and a vicar with a squeaky bicycle.

The endings, both of the book, and of the book, are quite cleverly arranged and delivered, and are very satisfying.

It's all truly delightful, even if it does seem rather like something you should be enjoying with your blueberry scones, clotted cream, and a nice pot of Earl Grey.

Recently my thriller diet has been considerably more gritty; mild disputations between the groundskeeper and the assistant at the surgeon's office are a fair bit afield.

Still, Horowitz is an author of tremendous skill, and I thoroughly enjoyed myself.

Categories: FLOSS Project Planets

ComputerMinds.co.uk: Rebranding ComputerMinds - Part 3: Website design

Planet Drupal - Mon, 2018-04-23 08:47

Now that we had settled on the branding and had established and planned exactly what we needed to create, I could start looking at designing the new website.

I wanted a clean, spacious site with a modern look and feel. I always keep a close eye on changing design trends and it was important that for our site I was careful to design something that would age well.

Over the years, Photoshop was my go-to software choice when designing, but after becoming increasingly frustrated by slow workflow I recently switched to using Sketch for larger projects. Sketch uses a modular, symbol based design where elements are quickly and easily reused. This makes edits super easy - changing the design of a button would change it everywhere it is used, not allowing inconsistencies to creep in.

Document setup involved ensuring some useful plugins were installed, then creating a new document with the standard Sketch template ‘Web Design’. This template gives you three pages to get going - one for assets (symbols), one for a style guide and one for the actual designs. Pages in Sketch are basically just canvasses, but having these three allows easy navigation and a quick workflow. After adjusting the layout settings to ensure we had the correct grid size, we could almost get going.

The last thing I did was create, in the style guide page, some simple shapes in all the colours from the branding guidelines that I created previously. I then created styles with these colours. This way, the colours would be consistent, and if they ever changed it was easy to change them in one place, and not in every item that used them.

Starting with the desktop header (as a symbol so it could be reused on each page), I immediately threw in the compact logo, without the word ‘ComputerMinds’. This was following on from an earlier decision to move away from the company name being so prominent. The address bar would always show the name anyway. After years of always seeing company logos on the left of the header, I wanted to move away from this. The recent rebrand of The Guardian saw a move away from this too, the new logo firmly positioned on the right of the header. Now, we had something most didn’t, a symmetrical logo. This meant we could have a symmetrical header by placing the logo in the middle.

Next up came the footer, also a reusable symbol. As we weren’t using the company name in the header this was a perfect opportunity to use a larger logo. Keeping it clean, I added the compact logo, contact information, social links, 2 menus and legal information, meeting requirements.

Once the header and footer of a site are designed, it really begins to look like a website. Before I could design any other page elements I needed to design something that would also be seen on almost every page - buttons. And with buttons come form elements. I've seen a recent trend towards buttons with square corners and either block colours or simple line borders. For our site I wanted something a little different, so I added a subtle shadow and simple movement on hover.

As with many of our recent development projects, we wanted to use tiles. This would allow us to create pages using tiles as building blocks, which is exactly what we’ll need for most pages. So these were designed next. As mentioned earlier, I wanted to create a more engaging experience for users by encouraging more visual content. With this in mind I created a collection of tile designs that incorporated large images and video and used the full width of the screen, bleeding off of the grid. These also used consistent, large headings and body text.

As well as visually engaging tiles, it was also necessary to create tiles to communicate through text alone. Using the colour palette created in the branding phase, I designed tile types that would allow variation visually through background colours, leaving the text to be communicated clearly. It’s also worth noting the prominence given to headings. Purposefully large to work well with whitespace, the headings make it absolutely clear what the visitor is reading.

During the designing of the tiles, I was also always thinking of the pages the site would have to be sure that the tiles were designed with the likely content in mind. Creating each tile type as a symbol in Sketch allowed me to quickly create page designs for multiple pages and adjust styles centrally.

In addition to these pages, we also required designs for pages that would not be made using tiles. A big part of our site and one we’re proud of is our many articles. Starting with a single article page I wanted to design a page that focussed the viewer on the content, without distraction. To achieve this I created white space either side of the content, narrowing the page and focussing the viewer (you're locked in right now, aren't you!). I also incorporated a title tile to display the article title clearly and created a header and footer for the article with all information the viewer might expect. These title tiles are also used on most pages of the site, making it absolutely clear which page you are on and adding further consistency. 

So the design was now complete. We had headers, footers, tiles, form elements, headings, body text and more, at all screen sizes. In the way we operate at ComputerMinds, this was all subject to adjustment: as the project grows and more people get involved, the requirements may change. As I said before, this is not a problem when using Sketch with its reusable elements.

Next up, we were to try something new - creating the front end using a pattern lab!


Mike Driscoll: PyDev of the Week: Stacy Morse

Planet Python - Mon, 2018-04-23 08:30

This week we welcome Stacy Morse (@geekgirlbeta) as our PyDev of the Week! Stacy loves Python and has been writing about it on her blog as well as giving talks at various user groups and conferences. You can catch her at PyCon 2018 in Ohio this year where she will be talking about code reviews. Let’s take a few moments to get to know her better!

Can you tell us a little about yourself (hobbies, education, etc):

I have a degree in Art, concentration in Photography and design. I like to spend as much time as I can hiking and taking macro photographs of moss and the natural life cycle of the forest.

I also like to build. Anything from projects using micro-controllers to elaborate sewing projects.

Why did you start using Python?

I started using Python as a way to light my photography out in the woods. I need a lot of control to illuminate tiny scenes. Micro Python allowed me to make small custom LED arrays and have a lot of control over them.

What other programming languages do you know and which is your favorite?

JavaScript, Python, and I’m dabbling in Clojure. I have to say, Python is by far my favorite. The language and community has everything to do with it. I’ve made some amazing friends all over the world because of Python.

What projects are you working on now?

One of the more interesting and fun projects I’m working on is a Bluetooth controller for presentations. I’m hoping to have it finished by the time I give my talk about code reviews at PyCon 2018. When it’s finished I’ll install the programmed micro-controllers into a Lightsaber hilt. I’ll have the ability to control the forward, backward clicks as well as turn on and off sound effects that will be triggered by a gyroscope. Time permitting I’ll throw in a laser pointer.

There are other projects, but this is the one I’m most excited to talk about.

Which Python libraries are your favorite (core or 3rd party)?

I really enjoyed using TensorFlow and matplotlib. I would like to get to use them in more projects.

I’d have to also mention the Hashids open source library. I went as far as refactoring some of my first Python code just to use it and write a blog post about it. It’s one of those topics I’d like to see covered more, especially for the newcomers to Python.

Is there anything else you’d like to say?

I’d like to thank the entire Python community, they are a very inspiring group. I’ve always felt very welcome and encouraged within it.

Thanks for doing the interview!


Vincent Bernat: A more privacy-friendly blog

Planet Debian - Mon, 2018-04-23 04:01

When I started this blog, I embraced some free services, like Disqus or Google Analytics. These services are quite invasive for users’ privacy. Over the years, I have tried to correct this to reach a point where I do not rely on any “privacy-hostile” services.

Analytics

Tim Millwood: Getting started with React and Drupal

Planet Drupal - Mon, 2018-04-23 02:20
Getting started with React and Drupal

Over the weekend I decided it was long overdue that I learnt React, or at least understood what all the fuss was about, so with npm in hand I installed yarn and started my quest.

We're going to use Create React App to set up our base React install. First install it, then run the command to create a React app called "drupal-react":
npm install -g create-react-app
create-react-app drupal-react
cd drupal-react

You can now run npm start (or yarn start) to start your app locally and open it in a browser. Here you'll see a React default page, all created from a React component called "App". If you take a look at the file src/App.js you will see the component and how the render() method returns the page HTML as JSX. We need to replace the code returned here to show some Drupal nodes, so how about replacing it with (or just adding) <NodeContainer />. This will call a new component, so at the top of App.js we will also need to import it; with the other import code add import NodeContainer from './NodeContainer';.

Now to create the NodeContainer component. First we need to add the Axios library which we'll use to query the Drupal REST API, run npm install axios --save. Then create the file src/NodeContainer.js, and in there add the following code:
import React, { Component } from 'react'
import axios from 'axios'

class NodeContainer extends Component {
  constructor(props) {
    super(props)
    this.state = {
      nodes: []
    }
  }

  componentDidMount() {
    axios.get('http://example.com/api/nodes')
    .then(response => {
      this.setState({nodes: response.data})
    })
    .catch(error => console.log(error))
  }

  render() {
    return (
      <ul>
        {this.state.nodes.map((node) => {
          return (
            <li key={node.nid}>{node.title}</li>
          )
        })}
      </ul>
    )
  }

}

export default NodeContainer

At the top of the file React and Axios are both imported, and the NodeContainer class is then created. The constructor is where we add the nodes array to the component state; componentDidMount() is called to fetch the nodes from the view at /api/nodes, which are then rendered as an unordered list.

To create the /api/nodes view, install the core RESTful Web Services module. This will allow you to create a "REST Export" view. Here the path can be set to /api/nodes, and you can select the nid and title fields.
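A "REST Export" view configured this way returns plain JSON: an array of objects, one per row, keyed by the fields you selected. Outside the browser you can sanity-check that shape with a few lines of Python (the sample payload below is illustrative, not output from a real site):

```python
import json

# Illustrative payload in the shape a REST Export view with
# nid and title fields typically returns.
payload = '[{"nid": "1", "title": "First post"}, {"nid": "2", "title": "Second post"}]'

nodes = json.loads(payload)
titles = [node["title"] for node in nodes]
print(titles)  # ['First post', 'Second post']
```

This mirrors what the React component does with response.data after the Axios call.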

As long as you left npm start running, you should be able to go back to your browser, and view a nice list of Drupal nodes being rendered in React.

Next, routing, to make these node titles clickable!


Martin Fitzpatrick: Calculon

Planet Python - Mon, 2018-04-23 02:00

Calculators are one of the simplest desktop applications, found by default on every window system. Over time these have been extended to support scientific and programmer modes, but fundamentally they all work the same.

In this short write up we implement a working standard desktop calculator using PyQt. This implementation uses a 3-part logic — including a short stack, operator and state. Basic memory operations are also implemented.

While this is implemented for Qt, you could easily convert the logic to work in hardware with MicroPython or a Raspberry Pi.

The full source code for Calculon is available in the 15 minute apps repository. You can download/clone to get a working copy, then install requirements using:

pip3 install -r requirements.txt

You can then run Calculon with:

python3 calculator.py

Read on for a walkthrough of how the code works.

User interface

The user interface for Calculon was created in Qt Designer. The layout of the mainwindow uses a QVBoxLayout with the LCD display added to the top, and a QGridLayout to the bottom.

The grid layout is used to position all the buttons for the calculator. Each button takes a single space on the grid, except for the equals sign which is set to span two squares.

Each button is defined with a keyboard shortcut to trigger a .pressed signal — e.g. 3 for the 3 key. The actions for each button are defined in code and connected to this signal.

If you want to edit the design in Qt Designer, remember to regenerate the MainWindow.py file using pyuic5 mainwindow.ui -o MainWindow.py.

Actions

To make the buttons do something we need to connect them up to specific handlers. The connections defined are shown first below, and then the handlers covered in detail.

First we connect all the numeric buttons to the same handler. In Qt Designer we named all the buttons using a standard format, as pushButton_nX where X is the number. This makes it simple to iterate over each one and connect it up.

We use a function wrapper on the signal to send additional data with each trigger — in this case the number which was pressed.

for n in range(0, 10):
    getattr(self, 'pushButton_n%s' % n).pressed.connect(lambda v=n: self.input_number(v))

The next block of signals to connect are the standard calculator operations: add, subtract, multiply and divide. Again these are hooked up to the same slot, and consist of a wrapped signal to transmit the operation (a specific function from Python's operator module).

self.pushButton_add.pressed.connect(lambda: self.operation(operator.add))
self.pushButton_sub.pressed.connect(lambda: self.operation(operator.sub))
self.pushButton_mul.pressed.connect(lambda: self.operation(operator.mul))
self.pushButton_div.pressed.connect(lambda: self.operation(operator.truediv))  # operator.div for Python 2.7
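The functions in the operator module are ordinary callables taking two arguments, which is what makes them easy to store and apply later. A stand-alone sketch of the idea (separate from the app code):

```python
import operator

# Each calculator key maps to a plain Python callable of two arguments,
# just as the wrapped signals above pass one into self.operation().
ops = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
}

# The stored callable can be applied whenever the result is needed.
result = ops['*'](6, 7)
print(result)  # 42
```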

In addition to the numbers and operators, we have a number of custom behaviours to wire up — percentage (to convert the previously typed number to a percentage amount), equals, reset and memory actions.

self.pushButton_pc.pressed.connect(self.operation_pc)
self.pushButton_eq.pressed.connect(self.equals)
self.pushButton_ac.pressed.connect(self.reset)
self.pushButton_m.pressed.connect(self.memory_store)
self.pushButton_mr.pressed.connect(self.memory_recall)

Now the buttons and actions are wired up, we can implement the logic in the slot methods for handling these events.

Operations

Calculator operations are handled using three components — the stack, the state and the current operation.

The stack

The stack is a short memory store of maximum 2 elements, which holds the numeric values with which we're currently calculating. When the user starts entering a new number it is added to the end of the stack (which, if the stack is empty, is also the beginning). Each numeric press multiplies the current stack end value by 10, and adds the value pressed.

def input_number(self, v):
    if self.state == READY:
        self.state = INPUT
        self.stack[-1] = v
    else:
        self.stack[-1] = self.stack[-1] * 10 + v
    self.display()
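The shift-and-add rule is easy to verify in isolation; a minimal stand-alone sketch (not the app code itself):

```python
def push_digit(entry, digit):
    # Shift the current entry left one decimal place, then add the digit,
    # so digits fill in from the right.
    return entry * 10 + digit

entry = 0
for digit in (2, 3, 5):
    entry = push_digit(entry, digit)
print(entry)  # 235
```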

This has the effect of numbers filling from the right as expected, e.g.

Value pressed    Calculation      Stack
                                  0
2                0 * 10 + 2       2
3                2 * 10 + 3       23
5                23 * 10 + 5      235

The state

A state flag toggles between READY and INPUT states; this affects the behaviour while entering numbers. In READY mode, the value entered is set directly onto the stack at the current position. In INPUT mode the above shift-and-add logic is used.

This is required so it is possible to type over a result of a calculation, rather than have new numbers added to the result of the previous calculation.

def input_number(self, v):
    if self.state == READY:
        self.state = INPUT
        self.stack[-1] = v
    else:
        self.stack[-1] = self.stack[-1] * 10 + v
    self.display()

You'll see switches between READY and INPUT states elsewhere in the code.

The current_op

The current_op variable stores the currently active operation, which will be applied when the user presses equals. If an operation is already in progress, we first calculate the result of that operation, pushing the result onto the stack, and then apply the new one.

Starting a new operation also pushes 0 onto the stack, making it now length 2, and switches to INPUT mode. This ensures any subsequent number input will start from zero.

def operation(self, op):
    if self.current_op:
        # Complete the current operation
        self.equals()

    self.stack.append(0)
    self.state = INPUT
    self.current_op = op

The operation handler for percentage calculation works a little differently. This instead operates directly on the current contents of the stack. Triggering the operation_pc takes the last value in the stack and divides it by 100.

def operation_pc(self):
    self.state = INPUT
    self.stack[-1] *= 0.01
    self.display()

Equals

The core of the calculator is the handler which actually does the maths. All operations (with the exception of percentage) are handled by the equals handler, which is triggered either by pressing the equals key, Enter or another operation key while an op is in progress.

The equals handler takes the current_op and applies it to the values in the stack (2 values, unpacked using *self.stack) to get the result. The result is put back in the stack as a single value, and we return to a READY state.

def equals(self):
    # Support to allow '=' to repeat the previous operation
    # if no further input has been added.
    if self.state == READY and self.last_operation:
        s, self.current_op = self.last_operation
        self.stack.append(s)

    if self.current_op:
        self.last_operation = self.stack[-1], self.current_op
        self.stack = [self.current_op(*self.stack)]

    self.current_op = None
    self.state = READY
    self.display()

Support has also been added for repeating previous operations by pressing the equals key again. This is done by storing the value and operator when equals is triggered, and re-using them if equals is pressed again without leaving READY mode (no user input).
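Stripped of the display code, the interplay between the stack, current_op and last_operation can be modelled as a tiny engine. This is a simplified stand-alone sketch (names mirror the article, but it is not the Calculon source), with READY/INPUT reduced to a boolean:

```python
import operator

class Engine:
    def __init__(self):
        self.stack = [0]
        self.current_op = None
        self.last_operation = None
        self.ready = True  # READY vs INPUT, reduced to a bool here

    def number(self, v):
        if self.ready:
            self.ready = False
            self.stack[-1] = v          # type over a previous result
        else:
            self.stack[-1] = self.stack[-1] * 10 + v

    def operation(self, op):
        if self.current_op:             # complete any pending operation first
            self.equals()
        self.stack.append(0)
        self.ready = False
        self.current_op = op

    def equals(self):
        if self.ready and self.last_operation:
            s, self.current_op = self.last_operation  # repeat '='
            self.stack.append(s)
        if self.current_op:
            self.last_operation = self.stack[-1], self.current_op
            self.stack = [self.current_op(*self.stack)]
        self.current_op = None
        self.ready = True
        return self.stack[-1]

e = Engine()
e.number(2); e.operation(operator.add); e.number(3)
print(e.equals())  # 5
print(e.equals())  # 8, pressing '=' again repeats "+ 3"
```

The second equals() call shows the repeat behaviour: with no new input, the stored operand and operator are replayed against the previous result.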

Memory

Finally, we can define the handlers for the memory actions. For Calculon we've defined only two memory actions: store and recall. Store takes the current value from the LCD display and copies it to self.memory. Recall takes the value in self.memory and puts it in the final place on our stack.

def memory_store(self):
    self.memory = self.lcdNumber.value()

def memory_recall(self):
    self.state = INPUT
    self.stack[-1] = self.memory
    self.display()

By setting the mode to INPUT and updating the display this behaviour is the same as for entering a number by hand.

Future ideas

The current implementation of Calculon only supports basic math operations. Most GUI desktop calculators also include support for scientific (and sometimes programmer) modes, which add a number of alternative functions.

In Calculon you could define these additional operations as a set of lambdas, which each accept the two parameters to operate on.
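One possible sketch of that idea: a mapping from button names to two-argument lambdas, each matching the signature that current_op expects. The operation names here are invented for illustration:

```python
import math

# Hypothetical scientific operations; each accepts the two stack values,
# so they could be stored in current_op just like operator.add.
scientific_ops = {
    'pow':  lambda a, b: a ** b,
    'root': lambda a, b: a ** (1 / b),
    'log':  lambda a, b: math.log(a, b),
}

print(scientific_ops['pow'](2, 10))   # 1024
print(scientific_ops['root'](16, 2))  # 4.0
```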

Switching modes (e.g. between normal and scientific) on the calculator will be tricky with the current QMainWindow-based layout. You may be able to rework the calculator layout in QtDesigner to use a QWidget base. Each view is just a widget, and switching modes can be performed by swapping out the central widget on your running main window.

