FLOSS Project Planets

Codementor: Let’s Write a Chat App in Python

Planet Python - Mon, 2017-12-04 06:00
Read this walkthrough for steps on how to create a fully functional chat app in Python.
Categories: FLOSS Project Planets

Amazee Labs: GraphQL for Drupalers - the fields

Planet Drupal - Mon, 2017-12-04 05:20
GraphQL for Drupalers - the fields

GraphQL is becoming more and more popular every day. Now that we have a beta release of the GraphQL module (mainly sponsored and developed by Amazee Labs) it's easy to turn Drupal into a first-class GraphQL server. In this series, we'll try to provide an overview of its features and see how they translate to Drupal.

Blazej Owczarczyk Mon, 12/04/2017 - 11:20

In the last post we covered the basic building blocks of GraphQL queries. We started with the naming conventions, then we took a look at how and when to use fragments. Finally, we moved on to aliases, which can be used to change names of the fields as well as to use the same field more than once in the same block. This week we'll delve into the ambiguous concept of Fields.

What exactly are GraphQL fields?

Fields are the most important part of any GraphQL query. In the following query, nodeById, title, entityOwner, and name are all fields.
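The original query did not survive here; a minimal reconstruction using the fields named above (the id value is illustrative):

```graphql
{
  nodeById(id: "1") {
    title
    entityOwner {
      name
    }
  }
}
```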


Each GraphQL field needs to have a type that is stored in the schema. This means that it has to be known up front and cannot be dynamic. At the highest level, there are two types of values a field can return: a scalar and an object.


Scalar fields are the leaves of any GraphQL query. They have no subfields and they return a concrete value; title and name in the above query are scalars. There are a few core scalar types in GraphQL, e.g.:

  • String: A UTF-8 character sequence.
  • Int: A signed 32‐bit integer.
  • Float: A signed double-precision floating-point value.
  • Boolean: true or false.

If you're interested in how Drupal typed data is mapped to GraphQL scalars, check out the graphql_core.type_map service parameter and the graphql_core.type_mapper service.

Complex types

Objects, like nodeById and entityOwner in the query above, are collections of fields. Each field that is not a scalar has to have at least one sub-field specified. The list of available sub-fields is defined by the object's type. If we paste the query above into GraphiQL (/graphql/explorer), we'll see that the entityOwner field is of type User and name is one of User's subfields (of type String).


Fields can also have arguments. Each argument has a name and a type. In the example above, the nodeById field takes two arguments: id (String) and langcode. The same field can be requested more than once with a different set of arguments by using aliases, as we saw in the last post.
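For instance, a sketch of such a call (the argument values are assumptions):

```graphql
{
  nodeById(id: "1", langcode: "en") {
    title
  }
}
```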

How do Drupal fields become GraphQL fields?

One of the great new traits of Drupal 8 core is the typed data system. In fact, this is the feature that makes it possible for GraphQL to expose Drupal structures in a generic way. For the sake of improving the developer experience, especially the experience of the developers of decoupled frontends, Drupal fields have been divided into two groups.

Multi-property fields

The first group comprises all the field types that have more than one property. These fields are objects with all their properties as sub-fields.

This is how we'd retrieve values of a formatted text field (body) and a link field. Date fields and entity references also fall into this category. The latter have some unique features so let's check them out.
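A hypothetical query along these lines; the field and property names are assumptions based on a typical Drupal configuration:

```graphql
{
  nodeById(id: "1") {
    body {
      value
      format
      processed
    }
    fieldLink {
      uri
      title
    }
  }
}
```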

Entity reference fields

This type of field has two properties: the scalar target_id and the computed entity. This special property inherits its type from the entity that the field is pointing to. Actually, we've already seen that in the named fragments example in the last post, where fieldTags and fieldCategory were both term reference fields. Let's look at a simplified example.

Since fieldCategory links to terms, its entity property is of type TaxonomyTerm. We can go further.
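A sketch of such a query (the sub-field names are assumptions):

```graphql
{
  nodeById(id: "1") {
    fieldCategory {
      targetId
      entity {
        name
      }
    }
  }
}
```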

The entityOwner property is of type User, so we get their email. We can go as deep as the entity graph itself goes. The following query is perfectly valid too.

It retrieves the title of an article that is related to the article that is related to the article with node id one, and this is where GraphQL really shines. The query is relatively simple to read, it returns a simple-to-parse response, and it does it all in one request. Isn't it just beautiful? :)
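A query of that shape might look like this (fieldRelated is a hypothetical entity reference field):

```graphql
{
  nodeById(id: "1") {
    fieldRelated {
      entity {
        fieldRelated {
          entity {
            title
          }
        }
      }
    }
  }
}
```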

Single-property fields

The second group comprises all the field types that have exactly one property (usually called value), like core plain text fields, email, phone, booleans and numbers. There's been a discussion about whether such fields should be rolled up to scalars or remain single-property objects. The former option won, and in 3.x members of this group have no sub-fields.
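So in 3.x a plain text field is requested directly as a scalar, with no value sub-field (the field name here is hypothetical):

```graphql
{
  nodeById(id: "1") {
    fieldSubtitle
  }
}
```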

That's it for the fields. Next week we're going to talk about... fields again :) but this time we'll see how to create one.

Categories: FLOSS Project Planets

Edward J. Yoon: AWS re:Invent 2017 참관기

Planet Apache - Mon, 2017-12-04 03:03

Just last week, James, our cloud platform team lead, and I attended AWS re:Invent 2017 on behalf of the Yeogi Eottae R&D center. I was struck first by the Las Vegas Strip, lined on both sides with enormous hotels and casinos quite unlike anything in Denver or San Francisco, and struck a second time, starting at the registration booth, by the sheer scale of the event. Las Vegas can apparently host crowds this large, which is why big conferences keep being held there (I understand CES takes place there as well). More than 40,000 attendees gathered in hotel venues across the city, and Uber drivers would casually ask whether we had come for the AWS conference. As is well known by now, many Korean companies use AWS. At the re:Play party I even bumped into a former Naver colleague I hadn't seen in nearly ten years. :-)

I didn't attend every technical session, so I can't claim my view is complete, but my impressions boil down to two themes: Serverless and AI.

Serverless is upending the programming paradigm from the ground up; full-stack engineers and their accumulated infrastructure know-how and hands-on skills (short of rare open-source-hacker calibre) will find it increasingly hard to be valued.

The same is true in AI: learning capability has already begun moving into the cloud.

  • The post by Jaehyun Bae, well known as a data engineer at Netflix, is also worth reading: https://www.facebook.com/jaehb/posts/10214839335118339?pnref=story 

One slide that caught my eye … The deep learning summit, which I attended most eagerly, was a marathon four-hour session, and one slide in it really stuck with me. Perhaps because it resembles things I often say myself: user experience has moved from the OS to the web, from the web to mobile, and is now moving on to AI. Where does our IT industry stand today? :-)

Leaving all those thoughts behind, I stopped by the Grand Canyon just before the flight home, took in that vast expanse of world, and so returned to Korea …
Categories: FLOSS Project Planets

ADCI Solutions: How we built the Symfony CRM for Oxford Business Group

Planet Drupal - Mon, 2017-12-04 01:35

As you know, the Symfony components are now in Drupal Core, and this encouraged our team to get to know the framework better. In this article, we will tell you what a CRM system on Symfony may look like, what goals it reaches, and what features it includes. The article is based on a real project for Oxford Business Group, a global publisher and consultancy with offices all around the globe.


Learn about the Symfony CRM





Categories: FLOSS Project Planets


Fabio Zadrozny: Creating extension to profile Python with PyVmMonitor from Visual Studio Code

Planet Python - Mon, 2017-12-04 01:20
Ok, so, the target here is building a simple extension for Visual Studio Code which will help in profiling the current module using PyVmMonitor (http://www.pyvmmonitor.com/).

The extension will provide a command which should present the user with a few options on how they want to do the profiling (with yappi, cProfile, or started without profiling but connected to the live sampling view).

I went with https://code.visualstudio.com/docs/extensions/yocode to bootstrap the extension, which gives a template with a command to run; I then renamed a bunch of things (such as the extension name, description, command name, etc.).

Next was finding a way to ask the user for the options (to ask how the profiling should be started). Searching for it revealed https://tstringer.github.io/nodejs/javascript/vscode/2015/12/14/input-and-output-vscode-ext.html, so I went with creating the needed constants and using vscode.window.showQuickPick (experimenting showed that undefined is returned if the user cancels the action, so that needs to be taken into account too).

Now, after the user chooses how to start PyVmMonitor, the idea would be making any launch actually start in the chosen profile mode (which is how it works in PyDev).

After investigating a bit, I couldn't find out how to intercept an existing launch to modify the command line to add the needed parameters for profiling with PyVmMonitor, so, this integration will be a bit more limited than the one in PyDev as it will simply create a new terminal and call PyVmMonitor asking it to profile the currently opened module...

In the other integrations, this was done as a setting: the user toggled profiling for any Python launch from a given point onward, and launches were then intercepted to change the command line, so it could intercept a unittest launch too. In this case, it seems there's currently no way to do that -- or some ineptitude on my part in finding the right API ;)

Now, searching in the VSCode Python plugin, I found a function execInTerminal, so I based the launching on it (but without using its settings, as I don't want to add a dependency on it for now; I just call `python` -- if that's wrong, since it opens a shell, the user is free to cancel and correct the command line to use the appropriate python interpreter, or change it as needed later on).

Ok, wrapping up: put the initial version of the code on https://github.com/fabioz/vscode-pyvmmonitor. Following https://code.visualstudio.com/docs/extensions/publish-extension did work out, so, there's a "Profile Python with PyVmMonitor" extension now ;).

Some notes I took during the process related to things I stumbled on or found awkward:
  • After publishing the first time and installing, the extension wasn't working because I wrongly put a dependency from npm in "devDependencies" and not in "dependencies" (the console in the developer tools helped in finding out that the dependency wasn't being loaded after the extension was installed).
  • When a dependency is added/removed, npm install needs to be called again, it's not automatic.
  • When uploading the extension I had the (common) error of not generating a token for "all" ;)
  • Apparently there's a "String" and a "string" in TypeScript (or at least within the dependencies when editing VSCode).
  • The whole launching on VSCode seems a bit limited/ad-hoc right now (for instance, .launch files create actual launchers which can be debugged but the python extension completely bypasses that by doing a launch in terminal for the current file -- probably because it's a bit of a pain creating that .launch file) -- I guess this reflects how young VSCode is... on the other hand, it really seems it built upon previous experience as the commands and bindings seems to have evolved directly to a good design (Eclipse painfully iterated over several designs on its command API).
  • Extensions seem to be limited by design. I guess this is good and bad at the same time... good in that extensions should never slow down the editor, but bad because they are not able to do anything which is not in the platform itself to start with -- for instance, I really missed a good unittest UI when using it... there are actually many other things I missed from PyDev, although I guess this is probably a reflection of how young the platform is (and it does seem to be shaping up fast).
Categories: FLOSS Project Planets

Hook 42: November Accessibility (A11Y) Talks

Planet Drupal - Sun, 2017-12-03 19:57

This month we did something a little bit different with the meet-up format. Instead of one person presenting a slide deck, we had a panel discussion on all things accessibility with four accessibility experts - Eric Bailey, Helena McCabe, Scott O'Hara, and Scott Vinkle!

There were some questions lined up to keep the conversation going, but we ended up having some amazing on-the-fly questions from the audience, so it was a bit more spontaneous and a lot of fun!

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: In the beginning, Drupal had an ambition

Planet Drupal - Sun, 2017-12-03 19:41
Since Dries' keynote at DrupalCon Vienna about how Drupal is for ambitious digital experiences, it has become somewhat clearer how its founder sees the future, and what agencies should focus on in a more competitive world of content management systems than ever. Although we have to admit that some agencies had already embraced the idea of building their businesses on delivering such digital experiences. So what differentiates ambitious digital experiences from, let's say, websites? Or "just plain" digital experiences? And what qualities does an ambitious digital experience have? It… READ MORE
Categories: FLOSS Project Planets

Hynek Schlawack: Python Application Deployment with Native Packages

Planet Python - Sun, 2017-12-03 19:00

Speed, reproducibility, easy rollbacks, and predictability are what we strive for when deploying our diverse Python applications. And that’s what we achieved by leveraging virtual environments and Linux system packages.

Categories: FLOSS Project Planets

Jeremy Epstein: A lightweight per-transaction Python function queue for Flask

Planet Python - Sun, 2017-12-03 17:25

The premise: each time a certain API method is called within a Flask / SQLAlchemy app (a method that primarily involves saving something to the database), send various notifications, e.g. log to the standard logger, and send an email to site admins. However, the way the API works is that several different methods can be forced to run in a single DB transaction, by specifying that SQLAlchemy only perform a commit when the last method is called. Ideally, no notifications should actually get triggered until the DB transaction has been successfully committed; and when the commit has finished, the notifications should trigger in the order that the API methods were called.

There are various possible solutions that can accomplish this, for example: a celery task queue, an event scheduler, and a synchronised / threaded queue. However, those are all fairly heavy solutions to this problem, because we only need a queue that runs inside one thread, and that lives for the duration of a single DB transaction (and therefore also only for a single request).

To solve this problem, I implemented a very lightweight function queue, where each queue is a deque instance, that lives inside flask.g, and that is therefore available for the duration of a given request context (or app context).
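A minimal sketch of the idea, with a plain dict standing in for flask.g (in the real app the deque would live in flask.g for the duration of the request, and the flush would run after the SQLAlchemy commit succeeds); the function names here are illustrative:

```python
from collections import deque

g = {}  # stands in for flask.g, which lives for a single request context

def defer(func, *args):
    """Queue a notification instead of firing it immediately."""
    g.setdefault("notify_queue", deque()).append((func, args))

def on_commit_success():
    """Once the DB transaction has committed, fire notifications in call order."""
    queue = g.pop("notify_queue", deque())
    while queue:
        func, args = queue.popleft()
        func(*args)

calls = []
defer(calls.append, "logged save of A")
defer(calls.append, "emailed admins about A")
# Nothing has fired yet; the queue drains only once the commit succeeds.
on_commit_success()
```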

Categories: FLOSS Project Planets

Vladimir Iakolev: Loading/progress indicator for shell with aging emojis

Planet Python - Sun, 2017-12-03 13:00

Recently, while waiting for a long-running script to finish, I thought that it would be nice to have some sort of loader with aging emojis. TLDR: we-are-waiting.

The “life” of an emoji is simple:

Categories: FLOSS Project Planets

Raphaël Hertzog: My Free Software Activities in November 2017

Planet Debian - Sun, 2017-12-03 12:52

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (16 commits to the security tracker).

I prepared and released DLA-1171-1 on libxml-libxml-perl.

I prepared a new update for simplesamlphp (1.9.2-1+deb7u1) fixing 6 CVE. I did not release any DLA yet since I have not been able to test the updated package. I’m hoping that the current maintainer can do it, since he wanted to work on the update a few months ago.

Distro Tracker

Distro Tracker has seen a high level of activity in the last month. Ville Skyttä continued to contribute a few patches; he helped notably to get rid of the last blocker for a switch to Python 3.

I then worked with DSA to get the production instance (tracker.debian.org) upgraded to stretch with Python 3.5 and Django 1.11. This resulted in a few regressions related to the Python 3 switch (despite the large number of unit tests) that I had to fix.

In parallel, Pierre-Elliott Bécue showed up on the debian-qa mailing list and started to contribute. I have been exchanging with him almost daily on IRC to help him improve his patches. He has been very responsive and I’m looking forward to continuing to cooperate with him. His first patch enabled the use of “src:” and “bin:” prefixes in the search feature, to specify whether we want to look among source packages or binary packages.

I did some cleanup/refactoring work after the switch of the codebase to Python 3 only.

Misc Debian work

Sponsorship. I sponsored many new packages: python-envparse 0.2.0-1, python-exotel 0.1.5-1, python-aws-requests-auth 0.4.1-1, pystaticconfiguration 0.10.3-1, python-jira 1.0.10-1, python-twilio 6.8.2-1, python-stomp 4.1.19-1. All those are dependencies for elastalert 0.1.21-1 that I also sponsored.

I sponsored updates for vboot-utils 0~R63-10032.B-2 (new upstream release for openssl 1.1 compat), aircrack-ng 1:1.2-0~rc4-4 (introducing airgraph-ng package) and asciidoc 8.6.10-2 (last upstream release, tool is deprecated).

Debian Installer. I submitted a few patches a while ago to support finding ISO images in LVM logical volumes in the hd-media installation method. Colin Watson reviewed them and made a few suggestions and expressed a few concerns. I improved my patches to take into account his suggestions and I resolved all the problems he pointed out. I then committed everything to the respective git repositories (for details review #868848, #868859, #868900, #868852).

Live Build. I merged 3 patches for live-build (#879169, #881941, #878430).

Misc. I uploaded Django 1.11.7 to stretch-backports. I filed an upstream bug on zim for #881464.


See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

Categories: FLOSS Project Planets

Syntax Highlighting Checker

Planet KDE - Sun, 2017-12-03 12:20

The KTextEditor Framework has used the syntax highlighting files provided by the KSyntaxHighlighting Framework since KDE Frameworks release 5.28.

The KSyntaxHighlighting Framework implements Kate’s highlighting system and is meanwhile used in quite a few applications (e.g. LabPlot, KDE PIM). What is nice is that the KSyntaxHighlighting framework is thoroughly unit tested. And while we do not have tests for all highlighting files, we still provide some quality assurance through a compile time checker.

How does it work? Well – in former times, Kate loaded all highlighting .xml files from disk (through the KTextEditor framework). This led to a slow startup over time, since there are >250 .xml files, each needing a stat system call at startup.

With the KSyntaxHighlighting Framework, all these xml files are compiled into a Qt resource (qrc file), which is then included in the KSyntaxHighlighting library.

In order to create the Qt resource file, we need to iterate over all available xml files anyway. So we take this opportunity to also scan the highlighting files for common mistakes.

As of today, we are checking the following:

  1. RegExpr: A warning is raised, if a regular expression has syntax errors.
  2. DetectChars: A warning is raised, if the char="x" attribute contains more or less than one character, e.g. when char="xyz", or char="\\" (no escaping required), or similar.
  3. Detect2Chars: Same as DetectChars, just for char="x" and char1="y".
  4. Keyword lists: A warning is raised, if a keyword entry contains leading or trailing spaces. Additional trimming just takes time.
  5. Keyword lists: A warning is raised if a keyword list is unused.
  6. Keyword lists: A warning is raised if multiple keyword lists use the same name (=identifier).
  7. Keyword lists: A warning is raised if a non-existing keyword list is used.
  8. Contexts: A warning is raised, if a non-existing context is referenced.
  9. Contexts: A warning is raised, if a context is unused.
  10. Contexts: A warning is raised, if multiple contexts have the same name (identifier clash).
  11. Attributes: A warning is raised, if non-existing itemData is used.
  12. Attributes: A warning is raised, if multiple itemDatas use the same name (identifier clash).
  13. Attributes: A warning is raised, if an itemData is unused.
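As an illustration, here is a hypothetical highlighting-file fragment that check 2 would flag, since char may only hold a single character (attribute values invented for the example):

```xml
<!-- check 2 would warn here: char="xy" contains two characters -->
<DetectChars attribute="Normal Text" context="#stay" char="xy"/>
```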

This list helps us nicely to catch many mistakes at compile time even before running unit tests.

Update (2017-12-17): All above issues are fixed for all highlighting files starting with the KSyntaxHighlighting 5.42 framework, to be released in January 2018.

Categories: FLOSS Project Planets

Ned Batchelder: Bite-sized command line tools: pylintdb

Planet Python - Sun, 2017-12-03 09:55

One of the things I love about Python is the abundance of handy libraries to cobble together small but useful tools. At work we had a large pylint report, and I wanted to understand it better. In particular, I wanted to trace back to which commit had introduced the violations. I wrote pylintdb.py to do the work.

Since we had a lot of violations (>5000!) I figured it would take some time to use git blame to find the commit for each line. I wanted a way to persist the progress through the lines. SQLite seemed like a good choice. It also would give me ad-hoc queryability, though to be honest, I didn’t even consider that at the time.

SQLite is part of the Python standard library, but there’s a third-party library that makes it super-convenient to use. Dataset lets you use a database without creating a schema or even model first. You just open a database, choose a table name, and then start writing dictionaries to it. It handles all the schema creation (or modification!) behind the scenes. Awesome.

These days, click is the tool of choice for command-line parsing, and other chores needed in the terminal. I used the progress bar functions. They aren’t perfect, but in only a few lines I had a workable indicator.

Other useful things from the Python standard library:

  • concurrent.futures for parallelizing the git blame work. It’s got a high-level “map” interface that did exactly what I needed without having to think about queues, threads, and so on.
  • subprocess.check_output does the subprocess thing people usually want: just run the command and give me its output.
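A rough sketch of how those two pieces combine (in pylintdb the worker would run git blame; echo stands in here so the sketch is self-contained):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def blame(line_no):
    # In pylintdb this would run `git blame` for the violation's line;
    # echo stands in so the sketch runs anywhere.
    return subprocess.check_output(["echo", str(line_no)], text=True).strip()

with ThreadPoolExecutor(max_workers=4) as pool:
    # Executor.map preserves input order even though the work runs in parallel.
    results = list(pool.map(blame, ["12", "57", "103"]))
```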

pylintdb isn’t earth-shattering, it just does exactly what I needed in 120 lines with a minimum of fuss, thanks to dataset, click, and Python.

Categories: FLOSS Project Planets

GNUnet News: The GNUnet System

GNU Planet! - Sun, 2017-12-03 09:52
Grothoff C. The GNUnet System. Informatique [Internet]. 2017; HDR:181. Available from: https://grothoff.org/christian/habil.pdf
Categories: FLOSS Project Planets

Import Python: #154 - Awesome Python Talks, Object Models, Free Apache Spark Guide and more

Planet Python - Sun, 2017-12-03 01:58
Worthy Read
Continuous Delivery: GoCD VS Spinnaker GoCD or Spinnaker? This post is an overview of GoCD and Spinnaker, why they are different from each other and which problems you should use them to solve.
Awesome python talks An opinionated list of awesome videos related to Python, with a focus on training and gaining hands-on experience.
Object models Here, then, is a (very) brief run through the inner workings of objects in four very dynamic languages. I don’t think I really appreciated objects until I’d spent some time with Python, and I hope this can help someone else whet their own appetite.
core-python, oops
Python for Data Analysis — A Critical Line-by-Line Review A not-so-nice review of the Python for Data Analysis book.
Python Software Foundation News: The PSF awarded $170,000 grant from Mozilla Open Source Program to improve sustainability of PyPI Today we are excited to announce that we have applied for, and been awarded, a grant to help improve the sustainability of the Python Package Index in the amount of $170,000.  This has been awarded by Mozilla, through the Foundational Technology track of their Open Source Support Program.  We would like to thank Mozilla for their support.
PSF, mozilla
Free Apache Spark™ Guide The Definitive Guide to Apache Spark. Download today!
ebook, advert, free
Using Python’s Pathlib Module - Practical Business Python The pathlib module was first included in python 3.4 and has been enhanced in each of the subsequent releases. Pathlib is an object oriented interface to the filesystem and provides a more intuitive method to interact with the filesystem in a platform agnostic and pythonic manner. I recently had a small project where I decided to use pathlib combined with pandas to sort and manage thousands of files in a nested directory structure. Once it all clicked, I really appreciated the capabilities that pathlib provided and will definitely use it in projects going forward. That project is the inspiration for this post.
Using NLTK to visualize my favorite albums’ lyrics A few weeks ago I was enrolled in Python for Data Science by UCSD on EdX.org. It is an introductory course so it starts with the basics but by the end of it you have worked with Twitter’s API, predicted weather using Machine Learning and even done some Natural Language Processing using NLTK.
Hellosign Embed docs directly on your website with a few lines of code. Test the API for free.
python cheat sheet core-python
pypika PyPika is a Python API for building SQL queries. The motivation behind PyPika is to provide a simple interface for building SQL queries without limiting the flexibility of handwritten SQL. Designed with data analysis in mind, PyPika leverages the builder design pattern to construct queries to avoid messy string formatting and concatenation. It is also easily extended to take full advantage of specific features of SQL database vendors.

Python Backend Developer at TXODDS Europe TXODDS are a dynamic player in the Sports data business looking to grow aggressively in the next 18 months. We are currently seeking a Python Backend Developer to join a team responsible for the operation, development and maintenance of our current and future systems.

deep-image-prior - 856 Stars, 70 Fork Image restoration with neural networks but without learning.
the-endorser - 77 Stars, 12 Fork An OSINT tool that allows you to draw out relationships between people on LinkedIn via endorsements/skills.
WPSploit - 36 Stars, 9 Fork WordPress Plugin Security Testing.
progrmoiz/python-snippets - 17 Stars, 0 Fork The most useful python snippets.
selectolax - 12 Stars, 0 Fork Python bindings to Modest engine (fast HTML5 parser with CSS selectors).
speeed - 11 Stars, 0 Fork Ping like tool that measures packet speed instead of response time.
pytudes - 4 Stars, 0 Fork Python programs to practice or demonstrate skills.
datasette - 0 Stars, 0 Fork An instant JSON API for your SQLite databases.
cidr-house-rules - 0 Stars, 0 Fork A lightweight API and collection system to centralize important AWS resource information across multiple accounts in near-realtime.
fireant - 0 Stars, 0 Fork fireant is a data analysis tool used for quickly building charts, tables, reports, and dashboards. It defines a schema for configuring metrics and dimensions which removes most of the legwork of writing queries and formatting charts. fireant even works great with Jupyter notebooks and in the Python shell, providing quick and easy access to your data.
Categories: FLOSS Project Planets

Noah Meyerhans: On the demise of Linux Journal

Planet Debian - Sat, 2017-12-02 21:54

Lwn, Slashdot, and many others have marked the recent announcement of Linux Journal's demise. I'll take this opportunity to share some of my thoughts, and to thank the publication and its many contributors for their work over the years.

I think it's probably hard for younger people to imagine what the Linux world was like 20 years ago. Today, it's really not an exaggeration to say that the Internet as we know it wouldn't exist at all without Linux. Almost every major Internet company you can think of runs almost completely on Linux. Amazon, Google, Facebook, Twitter, etc, etc. All Linux. In 1997, though, the idea of running a production workload on Linux was pretty far out there.

I was in college in the late 90's, and worked for a time at a small Cambridge, Massachusetts software company. The company wrote a pretty fancy (and expensive!) GUI builder targeting big expensive commercial UNIX platforms like Solaris, HP/UX, SGI IRIX, and others. At one point a customer inquired about the availability of our software on Linux, and I, as an enthusiastic young student, got really excited about the idea. The company really had no plans to support Linux, though. I'll never forget the look of disbelief on a company exec's face as he asked "$3000 on a Linux system?"

Throughout this period, on my lunch breaks from work, I'd swing by the now defunct Quantum Books. One of the monthly treats was a new issue of Linux Journal on the periodicals shelf. In these issues, I learned that more forward thinking companies actually were using Linux to do real work. An article entitled "Linux Sinks the Titanic" described how Hollywood deployed hundreds(!) of Linux systems running custom software to generate the special effects for the 1997 movie Titanic. Other articles documented how Linux was making inroads at NASA and in the broader scientific community. Even the ads were interesting, as they showed increasing commercial interest in Linux, both on the hardware (HyperMicro, VA Research, Linux Hardware Solutions, etc) and software (CDE, Xi Graphics) fronts.

The software world is very different now than it was in 1997. The media world is different, too. Not only is Linux well established, it's pretty much the dominant OS on the planet. When Linux Journal reported in the late 90's that Linux was being used for some new project, that was news. When they documented how to set up a Linux system to control some new piece of hardware or run some network service, you could bet that they filled a gap that nobody else was working on. Today, it's no longer news that a successful company is using Linux in production. Nor is it surprising that you can run Linux on a small embedded system; in fact it's quite likely that the system shipped with Linux pre-installed. On the media side, it used to be valuable to have everything bundled in a glossy, professionally produced archive published on a regular basis. Today, at least in the Linux/free software sphere, that's less important. Individual publication is easy on the Internet today, and search engines are very good at ensuring that the best content is the most discoverable content. The whole Internet is basically one giant continuously published magazine.

It's been a long time since I paid attention to Linux Journal, so from a practical point of view I can't honestly say that I'll miss it. I appreciate the role it played in my growth, but there are so many options for young people today entering the Linux/free software communities that it appears that the role is no longer needed. Still, the termination of this magazine is a permanent thing, and I can't help but worry that there's somebody out there who might thrive in the free software community if only they had the right door open before them.

Categories: FLOSS Project Planets

Thomas Goirand: There’s cloud, and it can even be YOURS on YOUR computer

Planet Debian - Sat, 2017-12-02 17:09

Each time I see the FSFE picture, just like on Daniel’s last post to planet.d.o, where it says:

“There is NO CLOUD, just other people’s computers”

it makes me so frustrated. There’s such a thing as a private cloud, set up on your own set of servers. I’ve been working on delivering OpenStack to Debian for the last six and a half years, motivated exactly by this issue: I refuse to accept that the only cloud people could use would be a closed source solution like GCE, AWS or Azure. The FSFE (and the FSF) completely dismissing this work is more than annoying: it is counterproductive. Not only should the FSFE not pull anyone away from the cloud, it should push for the public to choose cloud providers using free software like OpenStack.

The openstack.org marketplace lists 23 public cloud providers using OpenStack, so there is now no excuse to use any other type of cloud: for sure, there’s one where you need it. If you use a free software solution like OpenStack, then the question of whether you’re running on your own hardware, on some rented hardware (on which you deployed OpenStack yourself), or on someone else’s OpenStack deployment is just a practical one, and one you can always revisit quickly. That’s one of the very reasons why one should deploy on the cloud: so that it’s possible to redeploy quickly on another cloud provider, or even on your own private cloud. This gives you more freedom than you ever had, because it makes you no longer dependent on the hosting company you’ve selected: switching providers is just a matter of launching a script. The reality is that neither the FSFE nor RMS understands all of this. Please don’t buy into the FSFE’s very wrong message.

Categories: FLOSS Project Planets

Steve Kemp: BlogSpam.net repository cleanup, and email-changes.

Planet Debian - Sat, 2017-12-02 17:00

I've shuffled around all the repositories which are associated with the blogspam service, such that they're all in the same place and refer to each other correctly:

Otherwise I've done a bit of tidying up on virtual machines, and I'm just about to drop the use of qpsmtpd for handling my email. I've used the (perl-based) qpsmtpd project for many years, and documented how my system works in a "book":

I'll be switching to pure exim4-based setup later today, and we'll see what that does. So far today I've received over five thousand spam emails:

steve@ssh /spam/today $ find . -type f | wc -l
5731

Looking more closely though, over half of these rejections are "dictionary attacks", so they're not SPAM I'd see if I dropped the qpsmtpd-layer. Here's a sample log entry (for a mail that was both rejected at SMTP-time by qpsmtpd and archived to disc in case of error):

{"from":"<clzzgiadqb@ics.uci.edu>", "helo":"adrian-monk-v3.ics.uci.edu", "reason":"Mail for juha not accepted at steve.fi", "filename":"1512284907.P26574M119173Q0.ssh.steve.org.uk.steve.fi", "subject":"Viagra Professional. Beyond compare. Buy at our shop.", "ip":"2a00:6d40:60:814e::1", "message-id":"<p65NxDXNOo1b.cdD3s73osVDDQ@ics.uci.edu>", "recipient":"juha@steve.fi", "host":"Unknown"}

I suspect that with procmail piping to crm114, and a beefed-up spam-checking configuration for exim4, I'll not see a significant difference, and I'll have removed something non-standard. For what it is worth, over 75% of the remaining junk which was rejected at SMTP-time was rejected via DNS-blacklists. So again exim4 will take care of that for me.

If it turns out that I'm getting inundated with junk-mail I'll revert this, but I suspect that it'll all be fine.

Categories: FLOSS Project Planets