FLOSS Project Planets

Weekly Report 4

Planet KDE - Mon, 2020-07-06 20:00
GSoC Week 5 - Qt3D based backend for KStars

In the fifth week of GSoC, I worked on adding stars to the Qt3D painter backend, along with grids. The Qt3D code has now been moved into a new Skymap3D class.

What’s done this week
  • Shaders for instancing all kinds of stars (labelled and unlabelled) with all projection modes.

  • A new abstract camera controller.

The Challenges
  • Integration issues with the original SkyPainter API written to support multiple backends.

  • Use of the new custom camera controller and view matrix.

  • Zooming and focus for deep star objects.

What remains

My priorities for the next week include:

  • Adding Skymap events to Skymap3D.

  • Debugging deep star objects and getting started with other sky objects.


The Code
Categories: FLOSS Project Planets

GNUnet News: GNUnet 0.13.0

GNU Planet! - Mon, 2020-07-06 18:00

GNUnet 0.13.0 released

We are pleased to announce the release of GNUnet 0.13.0.
This is a new major release. It breaks protocol compatibility with the 0.12.x versions. Please be aware that Git master is thus henceforth INCOMPATIBLE with the 0.12.x GNUnet network, and interactions between old and new peers will result in signature verification failures. 0.12.x peers will NOT be able to communicate with Git master or 0.13.x peers.
In terms of usability, users should be aware that there are still a large number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.13.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.13.0 (since 0.12.2)
  • GNS:
    • Aligned with specification LSD001.
    • NSS plugin "block" fixed. #5782
    • Broken set NICK API removed. #6092
    • New record flags: SUPPLEMENTAL. Records which are not explicitly configured/published under a specific label but which are still informational are returned by the resolver and flagged accordingly. #6103
    • gnunet-namestore now complains when adding TLSA or SRV records outside of a BOX.
  • CADET: Fixed tunnel establishment as well as an outstanding bug regarding tunnel destruction. #5822
  • GNS/REVOCATION: The revocation proof-of-work hash function was changed to argon2 and modified to reduce variance.
  • RECLAIM: Increased ticket length to 256 bit. #6047
  • TRANSPORT: UDP plugin moved to experimental as it is known to be unstable.
  • UTIL:
    • Serialization / file format of ECDSA private keys harmonized with other libraries. Old private keys will no longer work! #6070
    • Now using libsodium for EC cryptography.
    • Builds against a cURL that is not linked against GnuTLS are now possible, but still not recommended. Configure will warn that this will impede the GNS functionality. This change makes hostlist discovery work more reliably on some distributions.
    • GNUNET_free_non_null removed. GNUNET_free changed to not assert that the pointer is not NULL. For reference see the Taler security audit.
    • AGPL request handlers added to GNUnet and the extension templates.
  • (NEW) GANA Registry: We have established a registry to be used for names and numbers in GNUnet. This includes constants for protocols including GNS record types and GNUnet peer-to-peer messages. See GANA.
  • (NEW) Living Standards: LSD subdomain and LSD0001 website: LSD0001
  • (NEW) Continuous integration: Buildbot is back.
  • Buildsystem: A significant number of build system changes:
    • libmicrohttpd and libjansson are now required dependencies.
    • New dependency: libsodium.
    • Fixed an issue with libidn(2) detection.
A detailed list of changes can be found in the ChangeLog and the 0.13.0 bugtracker.

Known Issues
  • There are known major design issues in the TRANSPORT, ATS and CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.
  • Some high-level tests in the test-suite fail non-deterministically due to the low-level TRANSPORT issues.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.


This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, Florian Dold, Jonathan Buchanan, t3sserakt, nikita and Martin Schanzenbach.

Categories: FLOSS Project Planets

Codementor: Understanding Virtual Environments in Python

Planet Python - Mon, 2020-07-06 16:37
Introduction to the concept of virtual environments in Python. Useful for a developer working on multiple projects on a single server.
Categories: FLOSS Project Planets

PSF GSoC students blogs: Blog post for week 5: Polishing

Planet Python - Mon, 2020-07-06 15:56

Last week was another week of code and documentation polishing. Originally I planned to implement duplicate filtering with external data sources, however, I already did that in week 2 when I evaluated the possibility of disk-less external queues (see pull request #2).

One of the easier changes was replacing the hostname/port/database settings triple with a single Redis URL. redis-py supports initializing a client instance from a URL, e.g. redis://[[username]:[password]]@localhost:6379/0. The biggest advantage of the URL is its flexibility. It allows the user to optionally specify the username and password, hostname, port, database name and even certain settings. The Redis URL scheme is also specified at https://www.iana.org/assignments/uri-schemes/prov/redis.
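As a sketch of what such a URL carries, the components can be pulled apart with the standard library alone (no redis-py required); the example URL is made up:

```python
from urllib.parse import urlsplit

url = "redis://user:secret@localhost:6379/0"
parts = urlsplit(url)

# Every piece of the old hostname/port/database triple (and more)
# is recoverable from the single URL string.
assert parts.scheme == "redis"
assert parts.username == "user"
assert parts.password == "secret"
assert parts.hostname == "localhost"
assert parts.port == 6379
assert parts.path.lstrip("/") == "0"  # the database number
```

redis-py's own `from_url` does essentially this parsing internally before building the connection settings.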

While working on the URL refactoring, we also noticed a subtle bug: A spider can have its own settings and hence it's possible for different spiders to use different Redis instances. My implementation was reusing an existing Redis connection but didn't account for spiders having different settings. The fix was easy: The client object is now cached in a dict and indexed by the Redis URL. This way, the object is only reused if the URL matches.
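The caching scheme described above can be sketched as follows; the function and factory names here are illustrative, not the actual patch:

```python
_clients = {}

def get_client(url, factory=object):
    """Return a client for the given Redis URL, reusing cached instances.

    A client is only reused when the full URL matches, so spiders with
    different settings (and thus different URLs) get different clients.
    """
    if url not in _clients:
        _clients[url] = factory()
    return _clients[url]
```

With this, two spiders configured with the same URL share one connection object, while a spider pointing at another instance or database gets its own:

```python
a = get_client("redis://localhost:6379/0")
b = get_client("redis://localhost:6379/0")
c = get_client("redis://localhost:6379/1")
assert a is b       # same URL: reused
assert a is not c   # different URL: separate client
```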

Another thing that kept me busy last week was detecting and handling various errors. If a user configures an unreachable hostname or wrong credentials, Scrapy should fail early and not in the middle of the crawl. The difficulty was that, depending on whether a new crawl is started or a previous crawl is resumed, queues are created lazily (new crawl) or eagerly (continued crawl). To unify the behavior, I introduced a queue self-check which works not only for Redis but for all queues (i.e. also plain old disk-based queues). The idea is that upon initialization of a priority queue, it pushes a fake Request object, pops it from the queue again, and compares the fingerprints. If the fingerprints match and no exception was raised up to this point, the self-check succeeded.

The code for this self-check looks as follows:

def selfcheck(self):
    # Find an empty/unused queue.
    while True:
        # Use a random priority to not interfere with existing queues.
        priority = random.randrange(2**64)
        q = self.qfactory(priority)
        if not q:  # Queue is empty
            break

    self.queues[priority] = q
    self.curprio = priority
    req1 = Request('http://hostname.invalid', priority=priority)
    self.push(req1)
    req2 = self.pop()
    if request_fingerprint(req1) != request_fingerprint(req2):
        raise ValueError(
            "Pushed request %s with priority %d but popped different request %s."
            % (req1, priority, req2)
        )

    if q:
        raise ValueError(
            "Queue with priority %d should be empty after selfcheck!" % priority
        )
The code may seem a bit complicated because of the random call, which needs additional explanation. The difficulty for the self-check is that a queue is essentially identified by its priority, and if a queue with a given priority already exists, it will be reused. This means that with a static priority we could pick up an existing queue and push to and pop from it. That is not really a problem for LIFO queues, where the last element pushed is also the first one popped. But for FIFO queues that are not empty, it means that a different element is popped (and not the request that we pushed). The solution to this problem is to generate a random priority, get a queue for that priority, and only use it if it is empty; otherwise, generate a new random priority. Due to the large range for the priority (0..2**64-1) it is extremely unlikely that a queue with that priority already exists, but even if it does, the loop ensures that another priority is generated.
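The push-then-pop behavior on non-empty queues can be checked with plain Python containers (a sketch, not Scrapy's actual queue classes):

```python
from collections import deque

# FIFO: on a non-empty queue, push-then-pop returns the oldest
# element, not the probe request we just pushed.
fifo = deque(["old1", "old2"])
fifo.append("probe")
assert fifo.popleft() == "old1"

# LIFO: push-then-pop returns the probe even when non-empty.
lifo = ["old1", "old2"]
lifo.append("probe")
assert lifo.pop() == "probe"

# Only an empty queue guarantees the probe comes back regardless
# of the queue discipline -- hence the search for an unused priority.
empty = deque()
empty.append("probe")
assert empty.popleft() == "probe"
```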

For this week, I will do another feedback iteration with my mentors and prepare for the topic of next week: Implementing distributed crawling using common message queues.

Categories: FLOSS Project Planets

PSF GSoC students blogs: I'm Not Drowning On My Own

Planet Python - Mon, 2020-07-06 15:09
Cold Water

Hello there! My school year is coming to an end, with some final assignments and group projects left to be done. I definitely underestimated their workload, and in the last (and probably next) few days I have been drowning in work trying to meet my deadlines.

One project that might be remotely relevant is cheese-shop, which tries to manage the metadata of packages from the real Cheese Shop. Other than that, schoolwork is draining a lot of my time and I can't remember the last time I came up with something new for my GSoC Project )-;

Warm Water

On the bright side, I received a lot of help and encouragement from contributors and stakeholders of pip. In the last week alone, I had five pull requests merged:

  • GH-8332: Add license requirement to _vendor/README.rst
  • GH-8320: Add utilities for parallelization
  • GH-8504: Parallelize pip list --outdated and --uptodate
  • GH-8411: Refactor operations.prepare.prepare_linked_requirement
  • GH-8467: Add utility to lazily acquire wheel metadata over HTTP

In addition to helping me get my PRs merged, my mentor Pradyun Gedam also gave me my first official feedback, including what I'm doing right (and wrong too!) and what I should keep doing to increase the chance of the project being successful.

GH-7819's roadmap (Danny McClanahan's discoveries and work on lazy wheels) is being closely tracked by hatch's maintainer Ofek Lev, which really makes me proud and warms my heart that what I'm helping build is actually needed by the community!

Learning How To Swim

With GH-8467 and GH-8530 merged, I'm now working on GH-8532 which aims to roll out the lazy wheel as the way to obtain dependency information via the CLI flag --use-feature=lazy-wheel.

GH-8532 was failing initially, despite being relatively trivial and despite the commit it was based on passing. Surprisingly, after rebasing it on top of GH-8530, it mysteriously became green. After the first (early) review, I was able to iterate on my earlier code, which had used the ambiguous exception RuntimeError.

The rest to be done is just adding some functional tests (I'm pretty sure this will be either overwhelming or underwhelming) to make sure that the command-line flag is working correctly. Hopefully this can make it into the beta of the upcoming release this month.

In other news, I've also submitted a patch improving the tests for the parallelization utilities, which were really messy as I originally wrote them. Better late than never!

Metaphors aside, I actually can't swim d-:

Dive Plan

After GH-8532, I think I'll try to parallelize downloads of wheels that are lazily fetched only for metadata. By the current implementation of the new resolver, for pip install, this can be injected directly between the resolution and build/installation process.

Categories: FLOSS Project Planets

RMOTR: Can Anybody Become a Data Scientist?

Planet Python - Mon, 2020-07-06 14:41
Photo by Free To Use Sounds on Unsplash

Hi. My name is Lorelei, I’m a writer, and I know absolutely nothing about coding.

Well, that’s not entirely true. I’ve been writing for tech companies for a few years, so naturally one picks up on a thing or two. And my husband is a talented engineer whom I’d love to refer to as the Shakespeare of Coding (but he would be embarrassed if I did). Regardless, if I was told to write a line of code or perish, or even to describe the differences between, say, Python and Java, I would willingly accept death.

That ends today.

Turns out, there’s a lot more money to be had in the Data Science field and this writerly, hobo existence is tiring. Instead of writing about RMOTR’s courses, I’m going to take them.

…and then write about taking them. (Writing is truly an affliction).


To be honest, I genuinely don’t know.

I expect to be challenged, frustrated, confused, and often times afraid.


I also expect to learn something valuable, be it actual coding or a genuine understanding of what my peers do, allowing me to better support them.

The most exciting prospect of navigating this field is discovering a new way to create and identifying the real possibilities that come with programming.

I’m very glad there will be plenty of projects and interactive tasks to complete. Listening is easy. Applying is hard. But, that’s where the real learning takes place. The more I can get my hands dirty practicing these new skills, the better.

That should also lead to lots of opportunities to celebrate, both victories and lessons learned. I’ve got my sticker chart of accomplishments ready to go!

I do anticipate crying at least twice. (More than twice). Tears of joy and enlightenment? I sure hope so.

https://medium.com/media/f83ac3f721e575f9470123057173b00d/hrefThe Course

Introduction to Programming with Python is my first stop on this journey. RMOTR co-founder Santiago Basulto leads this course and, boy, does he cover a lot.

  • Python Versions
  • Intro to Data Types
  • Intro to Operators
  • Arithmetic Operators
  • Assignment Operators
  • Comparison Operators
  • Logical Operators
  • Inverting Booleans
  • Intro to Functions
  • Function Arguments
  • Intro to Control Flow
  • IF Statements
  • Function Scopes
  • Nested Functions

It’s fine. I’ll be fine.

I gotta say, ‘booleans’ is a very intriguing word and I am looking forward to being able to accurately work it into sentences, willy nilly.

I’m also glad I can feel confident that my first meeting with Python is guaranteed to be thorough. Moving beyond this course, I should already have a solid grasp on the most important concepts, making each subsequent topic easier to understand. It should also leave room for some serious creativity. That’s my favorite.

So It Begins

Now I sally forth into the unknown, aiming to return a stronger, savvier human with skills that are actually marketable.

I’m starting off with the first two videos (seems logical) entitled Python Overview and Python Versions. I’ll share my thoughts and insights next week.

If you have any words of wisdom and/or support to share, they would be deeply appreciated. Words of Affirmation is my love language.

Even better, if you’d like to take the course with me, hop in. We have strength in numbers. (That’s a Data Science pun…?)

This is going to be great!

Can Anybody Become a Data Scientist? was originally published in rmotr.com on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Phase 2 - Weekly Check-in 6

Planet Python - Mon, 2020-07-06 14:29

End of Week 5  - 06/07/2020

The first phase of GSoC ended and I'm very happy that I passed my first evaluation! At the beginning I was scared and unsure about a lot of things but I was able to make it through thanks to the very supportive mentors and fellow students who helped me! I always tried to give my best in any kind of task and I will continue giving my best in the upcoming weeks!

What did you do this week?

This week was a little slow but I did a lot of research on how we can get better accuracy with traditional Computer Vision techniques and what all processing operations are important to achieve this.

What is coming up next?

The next task is to discuss with my mentors how the project will move forward in this phase! I will be adding image processing operations if necessary and will discuss the possibility of adding deep learning or OpenCV based models to get the best results while doing different computer vision tasks.

Did you get stuck anywhere?

As I was busy making my road-map for the second phase, I didn't get stuck anywhere major, apart from some errors related to OpenCV functions, but I eventually fixed them or dove into the depths of the internet to find the solution. :P

Thank you for reading!

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Blog Post #3

Planet Python - Mon, 2020-07-06 14:09

So this week I had my first evaluation. It went great, and I also received my stipend today, so hurray! Extracting golang metadata with shell is frustrating: not every Go module has its license or copyright text in a set format, so writing a generic script is quite challenging. I researched how Go licensing works; it turns out they have a dedicated license parser as well as scripts which can request a license from GitHub. But we can't do that. This week was mostly debugging and restarting again. Now I am trying to work on the go.sum file. Extracting names and versions is easy. All the Go modules live in ~/go/pkg/mod under their repo name. But we can't cd into a module directory using its name directly, because modules with uppercase letters in their name have a different directory name. My exams are also nearing, so I have to study for that too. :P
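The directory-name mismatch comes from Go's module cache escaping rules: to stay safe on case-insensitive file systems, each uppercase letter in a module path is stored as "!" followed by its lowercase form (so github.com/Azure becomes github.com/!azure). A sketch of that mapping (the function name is mine):

```python
def escape_module_path(path):
    # Mirror Go's module cache escaping (see golang.org/x/mod/module):
    # each uppercase ASCII letter becomes '!' + its lowercase form.
    return "".join("!" + c.lower() if c.isupper() else c for c in path)

assert escape_module_path("github.com/Azure/go-autorest") == \
    "github.com/!azure/go-autorest"
assert escape_module_path("github.com/pkg/errors") == \
    "github.com/pkg/errors"  # all-lowercase paths are unchanged
```

Applying this to the names read from go.sum should yield the actual directory names under ~/go/pkg/mod.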

Categories: FLOSS Project Planets

Podcast.__init__: Pure Python Configuration Management With PyInfra

Planet Python - Mon, 2020-07-06 13:56
Building and managing servers is a challenging task. Configuration management tools provide a framework for handling the various tasks involved, but many of them require learning a specific syntax and toolchain. PyInfra is a configuration management framework that embraces the familiarity of pure Python, allowing you to build your own integrations easily and package it all up using the same tools that you rely on for your applications. In this episode Nick Barrett explains why he built it, how it is implemented, and the ways that you can start using it today. He also shares his vision for the future of the project and how you can get involved. If you are tired of writing mountains of YAML to set up your servers then give PyInfra a try today.

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • This portion of Podcast.__init__ is brought to you by Datadog. Do you have an app in production that is slower than you like? Is its performance all over the place (sometimes fast, sometimes slow)? Do you know why? With Datadog, you will. You can troubleshoot your app’s performance with Datadog’s end-to-end tracing and in one click correlate those Python traces with related logs and metrics. Use their detailed flame graphs to identify bottlenecks and latency in that app of yours. Start tracking the performance of your apps with a free trial at datadog.com/pythonpodcast. If you sign up for a trial and install the agent, Datadog will send you a free t-shirt.
  • You listen to this show to learn and stay up to date with the ways that Python is being used, including the latest in machine learning and data analysis. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to pythonpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
  • Your host as usual is Tobias Macey and today I’m interviewing Nick Barrett about PyInfra, a pure Python framework for agentless configuration management
  • Introductions
  • How did you get introduced to Python?
  • Can you start by describing what PyInfra is and its origin story?
  • There are a number of options for configuration management of various levels of complexity and language options. What are the features of PyInfra that might lead someone to choose it over other systems?
  • What do you see as the major pain points in dealing with infrastructure today?
  • For someone who is using PyInfra to manage their servers, what is the workflow for building and testing deployments?
  • How do you handle enforcement of idempotency in the operations being performed?
  • Can you describe how PyInfra is implemented?
    • How has its design or focus evolved since you first began working on it?
    • What are some of the initial assumptions that you had at the outset which have been challenged or updated as it has grown?
  • The library of available operations seems to have a good baseline for deploying and managing services. What is involved in extending or adding operations to PyInfra?
  • With the focus of the project being on its use of pure Python and the easy integration of external libraries, how do you handle execution of python functions on remote hosts that requires external dependencies?
  • What are some of the other options for interfacing with or extending PyInfra?
  • What are some of the edge cases or points of confusion that users of PyInfra should be aware of?
  • What has been the community response from developers who first encounter and trial PyInfra?
  • What have you found to be the most interesting, unexpected, or challenging aspects of building and maintaining PyInfra?
  • When is PyInfra the wrong choice for managing infrastructure?
  • What do you have planned for the future of the project?
Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Categories: FLOSS Project Planets

PSF GSoC students blogs: GSoC Weekly Blog #3

Planet Python - Mon, 2020-07-06 13:23

This week, I worked on completing the delete and edit message functionality and writing tests for them. I was able to complete these tasks and later worked on refactoring some parts of the new code. I also made some small tweaks and fixes to the existing code base of mscolab. I was planning on redesigning the version window UI, but I realised the redesign would take a lot of work in the backend and changes to the database as well. I will need to talk to my mentors about how to proceed on this.

This week, I have a list of small tweaks that I need to make to the work I have finished so far, and I will be working on those. I will also be talking with my mentors to confirm some requirements for the next component of my project, which is the offline editing feature.

Writing tests took me the most time this week. With very limited documentation, it's kind of tough to write tests for PyQt5, but I was able to write them in the end.

Currently my PR is awaiting approval from the mentors and will be merged soon.

Categories: FLOSS Project Planets

Real Python: Object-Oriented Programming (OOP) in Python 3

Planet Python - Mon, 2020-07-06 12:21

Object-oriented programming (OOP) is a method of structuring a program by bundling related properties and behaviors into individual objects. In this tutorial, you’ll learn the basics of object-oriented programming in Python.

Conceptually, objects are like the components of a system. Think of a program as a factory assembly line of sorts. At each step of the assembly line a system component processes some material, ultimately transforming raw material into a finished product.

An object contains data, like the raw or preprocessed materials at each step on an assembly line, and behavior, like the action each assembly line component performs.

In this tutorial, you’ll learn how to:

  • Create a class, which is like a blueprint for creating an object
  • Use classes to create new objects
  • Model systems with class inheritance

Note: This tutorial is adapted from the chapter “Object-Oriented Programming (OOP)” in Python Basics: A Practical Introduction to Python 3.

The book uses Python’s built-in IDLE editor to create and edit Python files and interact with the Python shell, so you will see occasional references to IDLE throughout this tutorial. However, you should have no problems running the example code from the editor and environment of your choice.

Free Bonus: Click here to get access to a free Python OOP Cheat Sheet that points you to the best tutorials, videos, and books to learn more about Object-Oriented Programming with Python.

What Is Object-Oriented Programming in Python?

Object-oriented programming is a programming paradigm that provides a means of structuring programs so that properties and behaviors are bundled into individual objects.

For instance, an object could represent a person with properties like a name, age, and address and behaviors such as walking, talking, breathing, and running. Or it could represent an email with properties like a recipient list, subject, and body and behaviors like adding attachments and sending.

Put another way, object-oriented programming is an approach for modeling concrete, real-world things, like cars, as well as relations between things, like companies and employees, students and teachers, and so on. OOP models real-world entities as software objects that have some data associated with them and can perform certain functions.

Another common programming paradigm is procedural programming, which structures a program like a recipe in that it provides a set of steps, in the form of functions and code blocks, that flow sequentially in order to complete a task.

The key takeaway is that objects are at the center of object-oriented programming in Python, not only representing the data, as in procedural programming, but in the overall structure of the program as well.

Define a Class in Python

Primitive data structures—like numbers, strings, and lists—are designed to represent simple pieces of information, such as the cost of an apple, the name of a poem, or your favorite colors, respectively. What if you want to represent something more complex?

For example, let’s say you want to track employees in an organization. You need to store some basic information about each employee, such as their name, age, position, and the year they started working.

One way to do this is to represent each employee as a list:

kirk = ["James Kirk", 34, "Captain", 2265]
spock = ["Spock", 35, "Science Officer", 2254]
mccoy = ["Leonard McCoy", "Chief Medical Officer", 2266]

There are a number of issues with this approach.

First, it can make larger code files more difficult to manage. If you reference kirk[0] several lines away from where the kirk list is declared, will you remember that the element with index 0 is the employee’s name?

Second, it can introduce errors if not every employee has the same number of elements in the list. In the mccoy list above, the age is missing, so mccoy[1] will return "Chief Medical Officer" instead of Dr. McCoy’s age.
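Both problems can be seen directly (repeating the lists from above so the snippet runs on its own):

```python
kirk = ["James Kirk", 34, "Captain", 2265]
mccoy = ["Leonard McCoy", "Chief Medical Officer", 2266]  # age missing

# Index 0 holds the name only by convention -- nothing enforces it.
assert kirk[0] == "James Kirk"

# A missing element silently shifts the meaning of every later index:
# mccoy[1] should be an age, but it is the job title instead.
assert mccoy[1] == "Chief Medical Officer"
```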

A great way to make this type of code more manageable and more maintainable is to use classes.

Classes vs Instances

Classes are used to create user-defined data structures. Classes define functions called methods, which identify the behaviors and actions that an object created from the class can perform with its data.

In this tutorial, you’ll create a Dog class that stores some information about the characteristics and behaviors that an individual dog can have.
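A minimal sketch of what such a class might look like (the attribute and method names here are illustrative; the full article develops its own version step by step):

```python
class Dog:
    # Class attribute: shared by every Dog instance.
    species = "Canis familiaris"

    def __init__(self, name, age):
        # Instance attributes: unique to each object.
        self.name = name
        self.age = age

    def description(self):
        # A method: behavior that acts on the instance's own data.
        return f"{self.name} is {self.age} years old"

miles = Dog("Miles", 4)
assert miles.description() == "Miles is 4 years old"
assert miles.species == "Canis familiaris"
```

Unlike the employee lists above, each piece of data now has a name (miles.age rather than miles[1]), and a Dog missing a required attribute fails loudly at construction time instead of silently shifting indices.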

Read the full article at https://realpython.com/python3-object-oriented-programming/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check In - 5

Planet Python - Mon, 2020-07-06 12:16
What did I do till now?

I was going through Twisted's implementation of HTTP/1.x and how it handles multiple requests. I was focusing on their implementation of HTTPConnectionPool, which is responsible for establishing a new connection whenever required and for reusing an existing connection (from the cache).

Besides this, I made the requested changes to the HTTP/2 Client implementation.
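The core idea behind such a pool (not Twisted's actual code, just a sketch of the caching behavior, with connect_fn as a stand-in for real connection setup) is to key cached connections by endpoint:

```python
class ConnectionPool:
    """Cache connections by (scheme, host, port)."""

    def __init__(self, connect_fn):
        self._connect = connect_fn
        self._cache = {}

    def get(self, scheme, host, port):
        key = (scheme, host, port)
        if key not in self._cache:
            # No cached connection for this endpoint: establish one.
            self._cache[key] = self._connect(key)
        # Otherwise reuse the existing connection.
        return self._cache[key]

pool = ConnectionPool(lambda key: object())
c1 = pool.get("https", "example.org", 443)
c2 = pool.get("https", "example.org", 443)
assert c1 is c2                                   # endpoint reused
assert pool.get("https", "example.com", 443) is not c1
```

A real pool (like Twisted's) additionally tracks whether a connection is idle or in use and evicts closed ones; an H2ConnectionPool can go further and multiplex several requests over one connection.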

What's coming up next?

Next week I plan to finish coding H2ConnectionPool and its integration with HTTP2ClientProtocol. Along with the integration I plan to write unit tests as well.

Did I get stuck anywhere?

No. I mostly read lots of documentation & Twisted codebase throughout this week and fixed the bugs found in HTTP/2 Client implementation. 

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check-in #6

Planet Python - Mon, 2020-07-06 11:48
What did I do this week?

I discussed on adding NLP operations with my mentors in a weekly meeting. I worked on two operations to begin and created a PR. 

What's next?

I will be working on using these operations to train a TensorFlow model and do predictions using it.

Did I get stuck somewhere?

Yes, I got stuck in writing operations but it was resolved by discussion with fellow mates.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: Rcpp 1.0.5: Several Updates

Planet Debian - Mon, 2020-07-06 11:43

Right on the heels of the news of 2000 CRAN packages using Rcpp (also hitting 12.5% of CRAN packages, or one in eight), we are happy to announce release 1.0.5 of Rcpp. Since the ten-year anniversary and the 1.0.0 release in November 2018, we have been sticking to a four-month release cycle. The last release has, however, left us with a particularly bad taste due to some rather peculiar interactions with a very small (but ever so vocal) portion of the user base. So going forward, we will change two things. First off, we reiterate that we already make rolling releases: each minor snapshot of the main git branch gets a point release. Between release 1.0.4 and this 1.0.5 release, there were in fact twelve of those. Each and every one of these was made available via the drat repo, and we will continue to do so going forward. Releases to CRAN, however, are real work. If they then end up with as much nonsense as the last release 1.0.4, we think it is appropriate to slow things down some more, so we intend to switch to a six-month cycle. As mentioned, interim releases are always just one install.packages() call with a properly set repos argument away.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2002 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 203 in BioConductor. And per the (partial) logs of CRAN downloads, we are running steady at around one million downloads per month.

This release features again a number of different pull requests by different contributors covering the full range of API improvements, attributes enhancements, changes to Sugar and helper functions, extended documentation as well as continuous integration deployment. See the list below for details.

Changes in Rcpp patch release version 1.0.5 (2020-07-01)
  • Changes in Rcpp API:

    • The exception handler code in #1043 was updated to ensure proper include behavior (Kevin in #1047 fixing #1046).

    • A missing Rcpp_list6 definition was added to support R 3.3.* builds (Davis Vaughan in #1049 fixing #1048).

    • Missing Rcpp_list{2,3,4,5} definitions were added to the Rcpp namespace (Dirk in #1054 fixing #1053).

    • A further update corrected the header include and provided a missing else branch (Mattias Ellert in #1055).

    • Two more assignments are protected with Rcpp::Shield (Dirk in #1059).

    • One call to abs is now properly namespaced with std:: (Uwe Korn in #1069).

    • String object memory preservation was corrected/simplified (Kevin in #1082).

  • Changes in Rcpp Attributes:

    • Empty strings are not passed to R CMD SHLIB which was seen with R 4.0.0 on Windows (Kevin in #1062 fixing #1061).

    • The short_file_name() helper function is safer with respect to temporaries (Kevin in #1067 fixing #1066, and #1071 fixing #1070).

  • Changes in Rcpp Sugar:

    • Two sample() objects are now standard vectors and not R_alloc created (Dirk in #1075 fixing #1074).

  • Changes in Rcpp support functions:

    • Rcpp.package.skeleton() adjusts for a (documented) change in R 4.0.0 (Dirk in #1088 fixing #1087).

  • Changes in Rcpp Documentation:

    • The pdf file of the earlier introduction is again typeset with bibliographic information (Dirk).

    • A new vignette describing how to package C++ libraries has been added (Dirk in #1078 fixing #1077).

  • Changes in Rcpp Deployment:

    • Travis CI unit tests now run a matrix over the versions of R also tested at CRAN (rel/dev/oldrel/oldoldrel), and coverage runs in parallel for a net speed-up (Dirk in #1056 and #1057).

    • The exceptions test is now partially skipped on Solaris as it already is on Windows (Dirk in #1065).

    • The default CI runner was upgraded to R 4.0.0 (Dirk).

    • The CI matrix spans R 3.5, 3.6, r-release and r-devel (Dirk).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2455 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

PSF GSoC students blogs: GSoC 2020 Blog Post (#3)

Planet Python - Mon, 2020-07-06 10:52

Hello all!

It has been over a month since the official coding period began, and the first evaluation just got over. I just received a message saying $900 has been deposited in my account. It is an amazing feeling and I can't describe how happy I am right now.

The product finally got a name last week. We chose 'User Stories' as the name of the product. This was followed by changing the code to say 'User Stories' instead of 'Feature Requests'. Every submission will basically be a 'story' and not a 'request'. Over the past two weeks I have been working on creating and viewing a new story. My partner and I have been working on multiple pages that will fetch content from the backend in a particular format and display it to the user. There is also a page called 'New Story' that will be used to create new stories. This page is now complete with all the backend connectivity, and we have been testing it. Apart from this, based on discussions with the mentor, I worked on creating wireframes for a few more pages. We decided to add a 'My Profile' page which will house all the information about the user (profile picture, username, email, about, etc.). There will be two types of user profiles: one that the user opens up and can change the content that is being shown, and the other is when some other user opens up your profile, where they can see information about you as well as your 'stories'. I worked on creating designs for this, and after a few rounds of discussions they were approved. I have also been working on a 'My Stories' page, but that is still under discussion, so I'll probably talk about it next week.

I did not face any issues this time around. The only places I got stuck were related to the backend connectivity and a few calls with my partner resolved all of them. 
Like I said, I'll work on the 'My Stories' page this week and complete all the backend logic for the other pages.

Categories: FLOSS Project Planets

Drupal Atlanta Medium Publication: Attention All Event Organizers — Call for Board Nominations — Deadline Today

Planet Drupal - Mon, 2020-07-06 09:53
Attention All Event Organizers — Call for Board Nominations — Deadline Today

It feels like a lifetime ago that the event organizers' request to become an official working group was approved by the Drupal Association at DrupalCon Amsterdam. Since then, 2020 has been a year that no one will forget: from a global virus to social justice demonstrations, the world as we know it has been forever changed.

Lessons We Are Learning in 2020

So far in 2020, we have learned some valuable lessons that we think will help us be a better working group moving forward.

Organizing Events is Hard. Organizing volunteer-led events is difficult already, let alone during complete uncertainty. Many event organizers have had to make very difficult but swift decisions by either canceling or trying to pivot to a virtual conference format.

Finding the Right Time is Hard. Organizing a global group of volunteer event organizers is also hard. As someone who has spent little time on international teams, I admittedly thought finding a meeting time would be a breeze. I was completely wrong.

Global Representation is Hard. One of our top priorities was to have global representation to help foster growth and collaboration around the world, but unfortunately, due to either the meeting times or not enough focused marketing to international event organizers, participation was just not where the board felt it should be.

Changes We are Making

After a few emails and some friendly debates, the board looked for opportunities for change that can help solve some of the lessons we have learned.

Alternating Meeting Times in UTC Format. To help foster more international participation, all scheduled meetings will alternate times, all marketed and posted in Coordinated Universal Time (UTC). Public meetings will now be at 12:00 pm UTC and 12:00 am UTC.

Increase Board Membership to 9. The group decided to expand the board members to 9. We are highly encouraging organizers from around the world to submit their names for interest to increase our global representation.

Maintain and Recruit Advisory Board Members. Succession planning is critical for any operation, and our advisory board provides a more flexible commitment in participation, which we hope will be our number one resource for new members down the road.

Board Members Nominations. In addition to expanding the number of board seats, Suzanne Dergacheva from DrupalNorth (Canada) and Matthew Saunders (DrupalCamp Colorado) have accepted their nominations from advisors to board members.

Current Board Members
  • Camilo Bravo (cambraca) — DrupalCamp Quito — Ecuador / Hungary
  • Baddý Sonja Breidert (baddysonja) — DrupalCamp Iceland, Germany, Europe, Splash Awards — Europe
  • Kaleem Clarkson (kclarkson) — DrupalCamp Atlanta — Atlanta, GA, USA
  • Suzanne Dergacheva (pixelite) — DrupalNorth — Montreal, QC CANADA
  • Leslie Glynn (leslieg) Design 4 Drupal Boston, NEDCamp — Boston MA
  • Matthew Saunders (MatthewS) — Drupalcamp Colorado — Denver, CO, USA
  • Avi Schwab (froboy) — MidCamp, Midwest Open Source Alliance — Chicago, IL, USA
Things We are Working On

There are so many things that all of us organizers would like to get working, but one of our goals has been to identify our top priorities.

Event Organizer Support. We are here to help. When volunteer organizers need guidance navigating event challenges, there are various channels to get help.

Drupal Community Events Database. In collaboration with the Drupal Association, the EOWG has been working on putting together a new and improved event website database that will help market and collect valuable data for organizers around the world.
Submit your event today: https://www.drupal.org/community/events

Drupal Event Website Starter kit. To help organizers get events up and running quickly, an event website starter kit was identified as a valuable resource. Using the awesome work contributed by the Drupal Europe team, JD Leonard from DrupalNYC has taken the lead in updating the codebase. It is our hope more event organizers will help guide a collaborative effort and continue building an event starter kit that organizers can use.

Join the Event Organizer Slack here and Join #event-website-starterkit

Seeking Event Organizers Board Members and Advisory Committee Members — Submit Your Nomination Today

The Drupal Event Organizers Working Group is seeking nominations for Board Members and Advisory Committee Members. Anyone involved in organizing an existing or future community event is welcome to nominate.

EOWG Board Members. We are currently looking for nominations to fill two (2) board seats. For these seats, we are looking for diverse candidates that are event organizers from outside of North America. Interested organizers are encouraged to nominate themselves.

EOWG Advisory Committee. We are looking for advisory committee members. The advisory committee is designed to allow individuals to participate who may not have a consistent availability to meet or who are interested in joining the board in the future.

Nomination Selection Process: All remaining seats/positions will be selected by a majority vote of the EOWG board of directors.

Submit Your Nomination: To submit your nomination please visit the Issue below and submit your name, event name, country, territory/state, and a short reason why you would like to participate.

Issue: https://www.drupal.org/project/event_organizers/issues/3152319

Nomination Deadline: Monday, July 6th, 11:59 pm UTC

Originally published at https://www.drupal.org on June 17, 2020.

Attention All Event Organizers — Call for Board Nominations — Deadline Today was originally published in Drupal Atlanta on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: FLOSS Project Planets

Codementor: Tips For Python 2020

Planet Python - Mon, 2020-07-06 09:17
In this article you are going to read about new tips in python
Categories: FLOSS Project Planets

Kalamuna Blog: 5 Tips To Get Top SEO Results

Planet Drupal - Mon, 2020-07-06 08:38
By Jaida Regan, Mon, 07/06/2020 - 05:38

Every marketing professional knows that being at the top of search engine results is important and that SEO will help to get them there. You may have also heard that SEO can be challenging to learn, but the truth is, SEO doesn't have to be difficult, and sometimes it can be pretty fun!

These 5 basic SEO techniques are easy to implement and will have you on your way to the top of search engine results:

Categories: Analytics, Strategy. Author: Jason Blanda
Categories: FLOSS Project Planets

Stack Abuse: 'is' vs '==' in Python - Object Comparison

Planet Python - Mon, 2020-07-06 08:30
'is' vs '==' in Python

Python has two very similar operators for checking whether two objects are equal. These two operators are is and ==.

They are usually confused with one another because with simple data types, like ints and strings (which many people start learning Python with), they seem to do the same thing:

x = 5
s = "example"
print("x == 5: " + str(x == 5))
print("x is 5: " + str(x is 5))
print("s == 'example': " + str(s == "example"))
print("s is 'example': " + str(s is "example"))

Running this code will result in:

x == 5: True
x is 5: True
s == 'example': True
s is 'example': True

This shows that == and is return the same value (True) in these cases. However, if you tried to do this with a more complicated structure:

some_list = [1]
print("some_list == [1]: " + str(some_list == [1]))
print("some_list is [1]: " + str(some_list is [1]))

This would result in:

some_list == [1]: True
some_list is [1]: False

Here it becomes obvious that these operators aren't the same.

The difference comes from the fact that is checks for identity (of objects), while == checks for equality (of value).

Here's another example that might clarify the difference between these two operators:

some_list1 = [1]
some_list2 = [1]
some_list3 = some_list1
print("some_list1 == some_list2: " + str(some_list1 == some_list2))
print("some_list1 is some_list2: " + str(some_list1 is some_list2))
print("some_list1 == some_list3: " + str(some_list1 == some_list3))
print("some_list1 is some_list3: " + str(some_list1 is some_list3))

This results in:

some_list1 == some_list2: True
some_list1 is some_list2: False
some_list1 == some_list3: True
some_list1 is some_list3: True

As we can see, some_list1 is equal to some_list2 by value (they're both equal to [1]), but they are not identical, meaning they aren't the same object, even though they have equal values.

However, some_list1 is both equal and identical to some_list3 since they reference the same object in memory.

Mutable vs Immutable Data Types

While this part of the problem now might be clear (when we have named variables), another question might pop up:

How come is and == behave the same with unnamed int and string values (like 5 and "example") but don't behave the same with unnamed lists (like [1])?

There are two kinds of data types in Python - mutable and immutable.

  • Mutable data types are data types which you can "change" over time
  • Immutable data types stay the same (have the same memory location, which is what is checks) once they are created

Mutable data types are: list, dictionary, set, and user-defined classes.

Immutable data types are: int, float, decimal, bool, string, tuple, and range.
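A minimal sketch of this distinction, using the built-in id() (which returns an object's unique identifier): mutating a list keeps the same object, while "changing" an immutable string actually rebinds the name to a brand-new object.

```python
# Mutable: the list is modified in place, so its identity is unchanged.
nums = [1, 2]
nums_id = id(nums)
nums.append(3)
print(id(nums) == nums_id)  # True: still the same object

# Immutable: strings can't change in place; concatenation builds a
# new object and rebinds the name to it.
s = "ab"
s_id = id(s)
s = s + "c"
print(id(s) == s_id)  # False: a different object now
```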

Like many other languages, Python handles immutable data types differently from mutable ones, i.e. it saves them in memory only once.

So every 5 you use in your code is the exact same 5 you use in other places in your code, and the same goes for string literals you use.

If you use the string "example" once, every other time you use "example" it will be the exact same object. See this Note for further clarification.

We will be using a Python function called id() which prints out a unique identifier for each object, to take a closer look at this mutability concept in action:

s = "example"
print("Id of s: " + str(id(s)))
print("Id of the String 'example': " + str(id("example")) + " (note that it's the same as the variable s)")
print("s is 'example': " + str(s is "example"))
print("Change s to something else, then back to 'example'.")
s = "something else"
s = "example"
print("Id of s: " + str(id(s)))
print("s is 'example': " + str(s is "example"))
print()
list1 = [1]
list2 = list1
print("Id of list1: " + str(id(list1)))
print("Id of list2: " + str(id(list2)))
print("Id of [1]: " + str(id([1])) + " (note that it's not the same as list1!)")
print("list1 == list2: " + str(list1 == list2))
print("list1 is list2: " + str(list1 is list2))
print("Change list1 to something else, then back to the original ([1]) value.")
list1 = [2]
list1 = [1]
print("Id of list1: " + str(id(list1)))
print("list1 == list2: " + str(list1 == list2))
print("list1 is list2: " + str(list1 is list2))

This outputs:

Id of s: 22531456
Id of the String 'example': 22531456 (note that it's the same as the variable s)
s is 'example': True
Change s to something else, then back to 'example'.
Id of s: 22531456
s is 'example': True

Id of list1: 22103504
Id of list2: 22103504
Id of [1]: 22104664 (note that it's not the same as list1!)
list1 == list2: True
list1 is list2: True
Change list1 to something else, then back to the original ([1]) value.
Id of list1: 22591368
list1 == list2: True
list1 is list2: False

We can see that in the first part of the example, s returned to the exact same "example" object it was assigned to at the beginning, even if we change the value of s in the meantime.

However, list does not return the same object whose value is [1], but a whole new object is created, even if it has the same value as the first [1].

If you run the code above, you are likely to get different IDs for the objects, but the equalities will be the same.

When are 'is' and '==' Used Respectively?

The is operator is most commonly used when we want to compare the object to None, and restricting its usage to this particular scenario is generally advised unless you really (and I do mean really) want to check whether two objects are identical.

Besides, is is generally faster than the == operator because it simply checks for integer equality of the memory address.

Important note: The only situation when is works exactly as might be expected is with singleton classes/objects (like None). Even with immutable objects, there are situations where is does not work as expected.

For example, for large string objects generated by some code logic, or large ints, is can (and will) behave unpredictably. Unless you go through the effort of interning (i.e. making absolutely sure that only one copy of a string/int/etc. exists), all the various immutable objects you plan to use, is will be unpredictable.
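As a concrete illustration of that unpredictability (this is a CPython implementation detail, not a language guarantee): CPython caches small integers in the range -5 to 256, so an identity check on small ints happens to succeed, while the same check on larger ints created at runtime fails.

```python
# CPython implementation detail: ints from -5 to 256 are cached,
# so runtime-created small ints resolve to the same cached object.
a = int("5")
b = int("5")
print(a is b)  # True on CPython, thanks to the small-int cache

# Larger ints get a fresh object per construction.
c = int("1000")
d = int("1000")
print(c is d)  # False on CPython: two distinct objects
print(c == d)  # True: equal by value, as always
```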

The bottom line is: use == in 99% of cases.

If two objects are identical, they are also equal; but the opposite isn't necessarily true.

Overriding '==' and '!=' Operators

Operators != and is not behave in the same way as their "positive" counterparts do. Namely, != returns True if objects don't have the same value, while is not returns True if the objects are not stored in the same memory address.
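The same list example from earlier shows both negative operators in action:

```python
list_a = [1]
list_b = [1]
print(list_a != list_b)      # False: the values are equal
print(list_a is not list_b)  # True: two separate objects in memory

list_c = list_a
print(list_a is not list_c)  # False: both names point at one object
```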

One other difference between these two operators is that you can override the behavior of ==/!= for a custom class, while you can't override the behavior of is.

If you implement a custom __eq__() method in your class, you can change how the ==/!= operators behave:

class TestingEQ:
    def __init__(self, n):
        self.n = n

    # using the '==' to check whether both numbers
    # are even, or if both numbers are odd
    def __eq__(self, other):
        if (self.n % 2 == 0 and other % 2 == 0):
            return True
        else:
            return False

print(5 == TestingEQ(1))
print(2 == TestingEQ(10))
print(1 != TestingEQ(2))

This results in:

False
True
True

Conclusion

In short, ==/!= check for equality (by value) and is/is not check whether two objects are identical, i.e. checks their memory addresses.

However, avoid using is unless you know exactly what you're doing, or when dealing with singleton objects like None, since it can behave unpredictably.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Unleash content with Strapi and GraphQL in the feature request system in GSOC’20

Planet Python - Mon, 2020-07-06 08:12

You definitely received exactly what you wanted and you enjoyed using it. Now you just want to know how this actually works under the hood. Is it real magic? Did god just shape it according to your needs and send it down from heaven? Come with me and I'll answer all your GraphQL queries.

What did I do this week?

It took me some time to realize that GraphQL is not another programming language. It is a specification, or a technical standard. This means it can have several implementations and variations. GraphQL.js is the JavaScript reference implementation. Legends like Strapi implemented a fully customizable GraphQL plugin which creates a new endpoint /graphql and the GraphiQL interface. The plugin adds a new layer to easily request the API using GraphQL, without breaking the existing endpoints.

GraphQL secures all queries by default and the plugin plays well with the users and permissions plugin. When we are making a query with GraphQL we are hitting the same controller’s actions as we were doing with the REST endpoints.

I utilized the introspection system of GraphQL to fetch all categories available to our user while making a new story.

I also used a mutation to create a new user story which takes in all data entered by the user as parameter. I pass all my GraphQL queries through a set of custom policies before they hit my custom authentication controllers in the customized Strapi back end. This completed the feature to create a new user story.

On the server side I created a new model for the comments on the stories, which contains 3 fields. The first is a rich text field to write the comment, and the remaining two are relation fields with the user and feature request models. This means I also had to add a relation field to the feature request model to connect it with my comments model.

The next big feature of the week was adding functionality to display all stories on the home page according to their current status. The shadow CRUD feature of the Strapi GraphQL plugin automatically generates the type definitions, queries, mutations and resolvers based on our models. The feature also lets us make complex queries with arguments such as limit, sort, start and where. This helped me implement a GraphQL query to fetch the required details of all user stories and sort them by the highest number of votes and the most recently created stories. I used the query response to display the title, description and the number of votes and comments for each story on the home page.
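To give a flavor of what such a request looks like, here is a hypothetical Python sketch of the kind of query the shadow CRUD layer accepts. The model and field names (userStories, votes, and so on) are assumptions for illustration, not the project's actual schema.

```python
import json

def build_stories_query(sort="votes:desc", limit=10, start=0):
    """Build a GraphQL query for user stories, sorted and paginated."""
    return """
    query {
      userStories(sort: "%s", limit: %d, start: %d) {
        id
        title
        description
        votes
        comments { id }
      }
    }
    """ % (sort, limit, start)

# The query would be POSTed to the /graphql endpoint as a JSON body.
payload = json.dumps({"query": build_stories_query()})
print("votes:desc" in payload)  # True
```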

All these features for the week helped me catch up with GraphQL really fast. It was completely new to me and I was feeling stuck just a week before. Now, by the end of the week, I am happy that I can understand how it works so efficiently deep down the layers.

What is coming up next?

I will work on displaying all information about each user story on a separate page. This requires a query to fetch all story details by the story id. The date of publishing and author details will be available on this page as well.

The system needs an edit feature which may allow our beloved users to add information to their story. They will not be allowed to edit the existing description though.

Did I get stuck anywhere?

Sometimes rare bugs can be difficult to catch and you may feel stuck while trying to solve them. This week I came across something strange.

All my code and functionality ran successfully in my development environment. But after the code was merged, GitLab CI deployed it to our staging servers, and the whole application crashed, both the client and server side.

I finally solved all the bugs and learned that Strapi stores all the roles and permissions that we assign to our users in the database. In this case our development and staging servers use different databases. Simple :)

We just exported all our model data from the testing database to the one used by our staging server.

Another bug required specifying the exact CORS origin that can access our server, since I used withCredentials: true in my Axios configuration to store cookies in the browser and send them back in future requests.

This ended the week with the first evaluations, in which I received awesome feedback from my mentors and passed with flying colors. I thanked them for all their support and guidance. Looking forward to a great journey ahead :)

Categories: FLOSS Project Planets