Feeds

Windows Store Monthly Statistics

Planet KDE - Tue, 2020-08-11 14:40

Here are the acquisition numbers for the last 30 days (roughly equal to the number of installations, not mere downloads) for our applications:

A nice stream of new users for our software on the Windows platform.

Okular now has overtaken Kate in the latest monthly acquisitions :P

For completeness, here are the overall acquisitions since the stuff landed in the store:

If you want to help bring more of the stuff KDE develops to Windows, we have a meta Phabricator task where you can show up and say which parts you want to work on.

A guide on how to submit stuff can be found on our blog.

Thanks to all the people that help out with submissions & updates & fixes!

If you encounter issues on Windows and are a developer that wants to help out, all KDE projects really appreciate patches for Windows related issues.

Just contact the developer team of the corresponding application and help us to make the experience better on any operating system.

Categories: FLOSS Project Planets

Python Insider: Python 3.9.0rc1 is now available

Planet Python - Tue, 2020-08-11 13:43

Python 3.9.0 is almost ready. This release, 3.9.0rc1, is the penultimate release preview. You can get it here:

https://www.python.org/downloads/release/python-390rc1/

Now that we are entering the release candidate phase, only reviewed code changes that are clear bug fixes are allowed between this release candidate and the final release. The second candidate, and the last planned release preview, is currently planned for 2020-09-14.

Please keep in mind that this is a preview release and its use is not recommended for production environments.

Calls to action

Core developers: all eyes on the docs now
  • Are all your changes properly documented?
  • Did you notice any other changes that have insufficient documentation?
Community members

We strongly encourage maintainers of third-party Python projects to prepare their projects for 3.9 compatibility during this phase. As always, report any issues to the Python bug tracker.

Installer news

This is the first version of Python to default to the 64-bit installer on Windows. The installer now also actively disallows installation on Windows 7. Python 3.9 is incompatible with this unsupported version of Windows.

Major new features of the 3.9 series, compared to 3.8

Some of the major new features and changes in Python 3.9 are:

  • PEP 584, Union Operators in dict
  • PEP 585, Type Hinting Generics In Standard Collections
  • PEP 593, Flexible function and variable annotations
  • PEP 602, Python adopts a stable annual release cadence
  • PEP 615, Support for the IANA Time Zone Database in the Standard Library
  • PEP 616, String methods to remove prefixes and suffixes
  • PEP 617, New PEG parser for CPython
  • BPO 38379, garbage collection does not block on resurrected objects;
  • BPO 38692, os.pidfd_open added that allows process management without races and signals;
  • BPO 39926, Unicode support updated to version 13.0.0;
  • BPO 1635741, when Python is initialized multiple times in the same process, it does not leak memory anymore;
  • A number of Python builtins (range, tuple, set, frozenset, list, dict) are now sped up using PEP 590 vectorcall;
  • A number of Python modules (_abc, audioop, _bz2, _codecs, _contextvars, _crypt, _functools, _json, _locale, operator, resource, time, _weakref) now use multiphase initialization as defined by PEP 489;
  • A number of standard library modules (audioop, ast, grp, _hashlib, pwd, _posixsubprocess, random, select, struct, termios, zlib) are now using the stable ABI defined by PEP 384.
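Two of these additions are easy to try interactively. Here is a quick sketch (variable names invented for illustration) of PEP 584's dict union operators and PEP 616's new string methods:

```python
# Requires Python >= 3.9. Names here are invented for illustration.
defaults = {"theme": "light", "lang": "en"}
overrides = {"theme": "dark"}

# PEP 584: dict union operators
merged = defaults | overrides   # {'theme': 'dark', 'lang': 'en'}
defaults |= overrides           # in-place variant

# PEP 616: removeprefix / removesuffix
name = "test_parser.py".removeprefix("test_").removesuffix(".py")
print(merged, name)             # {'theme': 'dark', 'lang': 'en'} parser
```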
Categories: FLOSS Project Planets

Evolving Web: How to Embed Facebook Videos with the Drupal Media Module

Planet Drupal - Tue, 2020-08-11 13:43

Since the start of lockdown in March, you’ve probably seen way more live streams on social media than you ever had before. These are taking place on platforms like YouTube, Instagram, and Facebook Live.

We recently got a question at a training from an attendee whose organization has been doing more live streams on Facebook. They also use Drupal’s built-in Media module and were wondering how they could embed their recorded live streams from Facebook onto their Drupal site. In today's tutorial, I outline two different methods for pulling this off.

Method 1: Basic HTML Embed

This method isn’t integrated with Drupal’s Media module. The video won’t appear in your Media Library and can’t be referenced and re-used throughout your site.

However, if you’re looking for a quick solution and don’t need all the magic that Media offers, this might be a good route.

  1. Go to the video on Facebook
  2. In the top-right of the page, click the three dots to reveal the drop-down and click Embed
  3. Copy the HTML code
  4. Go to your WYSIWYG editor field, click View Source, then paste the HTML code.
Method 2: Drupal Media Module and Remote Video

Out of the box, Drupal ships with the Media module, which has a few different Media Types, including:

  • Audio
  • Document
  • Image
  • Remote video
  • Video

The difference between Remote video and Video is that the former refers to videos hosted on third-party platforms, like YouTube and Vimeo. In fact, out of the box, Drupal Media can take a URL from either of those two sites and make it playable. Unfortunately, we have to do some work to make videos from other platforms playable.

I’ll be showing how a module called Media Entity Facebook can be used to embed Facebook videos. But first: make sure you’ve already enabled the built-in Media module.

OK, let’s go.

Note: the embedded YouTube tutorial was recorded on July 13, 2020. On July 17th, a new release of the Media Entity Facebook module came out, and this is the recommended solution. You can skip to the 9:05 mark to jump straight to the process of configuring this module.

1. Get the Media Entity Facebook module for Drupal

First, download and enable the Media Entity Facebook module.

2. Create a new Media Type

After enabling, we can create a new Media Type that uses Facebook as a Media source:

  • Go to /admin/structure/media
  • Click Add media type
  • Give it a name and a description, then choose Facebook from the Media Source drop-down
  • Click Save

3. Create or modify a media field to handle the Facebook video media type

Now that we have a new Media Type, we can create a field (or modify an existing field) on a content type to use it.

  • Add a new field to your content type 
  • Choose Media from the Field Type drop-down and name your field
  • In the list of Media Types, choose the new Facebook media type that you created in the previous step.

4. Change the input widget

After adding a new Media field, the default method of adding the content is via the autocomplete widget. This is silly, and we are going to change this widget to something more sensible.

Go back to editing your content type and click on the Manage form display tab. Find the field that uses your Facebook video embed and change the widget to Inline entity form - Simple. This will let your users simply paste the URL of the Facebook video.

Now, when you add content, you can give your media entity a name and paste in just the URL of the Facebook video (not the embed code).

As for your content type, by default it will display several fields you may not need, such as Authored by, Authored on, URL alias, and Published. In the Manage form display menu of your media type, you can drag these fields to the Disabled section.

5. Change the display of the field

After saving our content, by default this module just shows the Facebook URL and not the embedded video. To get it to display as an embedded video, we have to change the display type of the Media entity itself:

  • Go to /admin/structure/media/manage/name_of_your_media_type/display
  • Change the format of your field to Facebook embed
  • Click Save

And now, your content should have the Facebook video embedded.

Wrap-Up

The video tutorial and the directions above showed the Media Entity Facebook module being used on a content type. That said, nothing prevents you from creating a new Media Type and using it in Paragraphs.

In fact, that’s exactly how I use it on one of my personal sites. I record live streams and at the end of the week, I post all of the videos recorded into a blog post which is based in Paragraphs. Here you can learn all about the Moon, from embedded Facebook videos.

Categories: FLOSS Project Planets

Jonathan Carter: GameMode in Debian

Planet Debian - Tue, 2020-08-11 11:35

What is GameMode, what does it do?

About two years ago, I ran into some bugs running a game on Debian, so installed Windows 10 on a spare computer and ran it on there. I learned that when you launch a game in Windows 10, it automatically disables notifications, screensaver, reduces power saving measures and gives the game maximum priority. I thought “Oh, that’s actually quite nice, but we probably won’t see that kind of integration on Linux any time soon”. The very next week, I read the initial announcement of GameMode, a tool from Feral Interactive that does a bunch of tricks to maximise performance for games running on Linux.

When GameMode is invoked it:

  • Sets the kernel performance governor from ‘powersave’ to ‘performance’
  • Provides I/O priority to the game process
  • Optionally sets nice value to the game process
  • Inhibits the screensaver
  • Tweaks the kernel scheduler to enable soft real-time capabilities (handled by the MuQSS kernel scheduler, if available in your kernel)
  • Sets GPU performance mode (NVIDIA and AMD)
  • Attempts GPU overclocking (on supported NVIDIA cards)
  • Runs custom pre/post run scripts. You might want to run a script to disable your ethereum mining or suspend VMs when you start a game and resume it all once you quit.

How GameMode is invoked

Some newer games (proprietary games like “Rise of the Tomb Raider”, “Total War Saga: Thrones of Britannia”, “Total War: WARHAMMER II”, “DiRT 4” and “Total War: Three Kingdoms”) will automatically invoke GameMode if it’s installed. For games that don’t, you can manually invoke it using the gamemoderun command.

Lutris is a tool that makes it easy to install and run games on Linux, and it also integrates with GameMode. (Lutris is currently being packaged for Debian, hopefully it will make it in on time for Bullseye).

Screenshot of Lutris, a tool that makes it easy to install your non-Linux games, which also integrates with GameMode.

GameMode in Debian

The latest GameMode is packaged in Debian (Stephan Lachnit and I maintain it in the Debian Games Team) and it’s also available for Debian 10 (Buster) via buster-backports. All you need to do to get up and running with GameMode is to install the ‘gamemode’ package.

GameMode in Debian supports 64-bit and 32-bit mode, so running it with older games (and many proprietary games) still works. Some distributions (like Arch Linux) have dropped 32-bit support, so 32-bit games on those systems lose any kind of GameMode integration, even if you can get them running via other wrappers.

We also include a binary called ‘gamemode-simulate-game’ (installed under /usr/games/). This is a minimalistic program that will invoke GameMode for 10 seconds and then exit without an error if it was successful. Its source code might be useful if you’d like to add GameMode support to your game, or patch a game in Debian to invoke it automatically.

In Debian we install Gamemode’s example config file to /etc/gamemode.ini where a user can customise their system-wide preferences, or alternatively they can place a copy of that in ~/.gamemode.ini with their personal preferences. In this config file, you can also choose to explicitly allow or deny games.
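For illustration, here is what such a config can look like. This is only a sketch in the shape of upstream’s example file; the notify-send commands are placeholders you’d swap for your own scripts (e.g. pausing that mining rig or suspending VMs):

```ini
[general]
; renice the game process for better scheduling
renice=10

[custom]
; placeholder pre/post hooks: substitute your own commands here
start=notify-send "GameMode started"
end=notify-send "GameMode ended"
```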

GameMode might also be useful for many pieces of software that aren’t games. I haven’t done any benchmarks on such software yet, but it might be great for users who use CAD programs or use a combination of their CPU/GPU to crunch a large amount of data.

I’ve also packaged an extension for GNOME called gamemode-extension. The Debian package is called ‘gnome-shell-extension-gamemode’. You’ll need to enable it using gnome-tweaks after installation, it will then display a green controller in your notification area whenever GameMode is active. It’s only in testing/bullseye since it relies on a newer gnome-shell than what’s available in buster.

Running gamemode-simulate-game, with the shell extension showing that it’s activated in the top left corner.
Categories: FLOSS Project Planets

PSF GSoC students blogs: Week 10 Check-in

Planet Python - Tue, 2020-08-11 11:16
What did you do this week?

This week I started a new PR that adds multimethods for statistical functions. The multimethods added are the following:

Order statistics

  • percentile
  • nanpercentile
  • quantile
  • nanquantile

Averages and variances

  • median
  • average
  • mean
  • nanmedian
  • nanmean
  • nanstd
  • nanvar

Correlating

  • corrcoef
  • correlate
  • cov

Histograms

  • histogram
  • histogram2d
  • histogramdd
  • bincount
  • histogram_bin_edges
  • digitize

As of now, these new additions have the essential parts of a multimethod (an argument dispatcher and an argument replacer) but are missing default implementations. I've also modified some of the argument replacers to support the out keyword, making them more general-purpose and removing the need for some of the other, more specific argument replacers.

What is coming up next?

I'll continue the PR started this week by working on default implementations for the simpler multimethods like median and mean. I will also start a new PR that adds multimethods for NumPy's random module that picks up on an older PR left off by one of my mentors. I've been wanting to work on these multimethods for a while but wasn't able to because of other project commitments. After discussing with my mentors we decided to change the work expected for the following week from the JAX backend implementation to the random module since this has a higher priority.

Did you get stuck anywhere?

Yes, working on the default implementation for median. Although this might be one of the easiest defaults among the multimethods in the current PR, it has proven to be a bit of a challenge. The idea is to traverse the array along the given axes and apply a reduction function. Because of the array manipulations necessary to accomplish this, and since I can't use item assignment, the default implementation is more complicated than initially thought. More recently, one of my mentors provided a general template for implementing it, which might help me get unblocked. If I can do this, the defaults for other reduction multimethods should easily follow.
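To make the idea concrete, here is a rough pure-Python sketch of what such a default could look like in the 1-D case (this is my own illustration, not the actual unumpy code):

```python
# Rough sketch of a median "default implementation" built only from more
# primitive operations (sort + indexing, no item assignment), 1-D case.
def median_default(values):
    s = sorted(values)     # stand-in for a backend-provided sort multimethod
    n = len(s)
    # average the two middle elements (they coincide when n is odd)
    return (s[(n - 1) // 2] + s[n // 2]) / 2

print(median_default([3, 1, 2]))      # 2.0
print(median_default([1, 2, 3, 4]))   # 2.5
```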

Categories: FLOSS Project Planets

Mike Gabriel: No Debian LTS Work in July 2020

Planet Debian - Tue, 2020-08-11 11:16

In July 2020, I was originally assigned 8h of work on Debian LTS as a paid contributor, but holiday season overwhelmed me and I did not do any LTS work, at all.

The assigned hours from July I have taken with me into August 2020.

light+love,
Mike

Categories: FLOSS Project Planets

Dataquest: How to Learn Python for Data Science In 5 Steps

Planet Python - Tue, 2020-08-11 10:57
Why Learn Python For Data Science?

Before we explore how to learn Python for data science, we should briefly answer why you should learn Python in the first place.

In short, understanding Python is one of the valuable skills needed for a data science career.

Though it hasn’t always been, Python is the programming language of choice for data science. Here’s a brief history:

Data science experts expect this trend to continue with increasing development in the Python ecosystem. And while your journey to learn Python programming may be just beginning, it’s nice to know that employment opportunities are abundant (and growing) as well.

According to Indeed, the average salary for a Data Scientist is $121,583.

The good news? That number is only expected to increase, as demand for data scientists is expected to keep growing. In 2020, there are three times as many job postings in data science as job searches for data science, according to Quanthub. That means the demand for data scientists is vastly outstripping the supply.

So, the future is bright for data science, and Python is just one piece of the proverbial pie. Fortunately, learning Python and other programming fundamentals is as attainable as ever. We’ll show you how in five simple steps.

But remember – just because the steps are simple doesn’t mean you won’t have to put in the work. If you apply yourself and dedicate meaningful time to learning Python, you have the potential to not only pick up a new skill, but potentially bring your career to a new level.

How to Learn Python for Data Science


Click to View Our How to Learn Python Infographic


First, you’ll want to find the right course to help you learn Python programming. Dataquest’s courses are specifically designed for you to learn Python for data science at your own pace, challenging you to write real code and use real data in our interactive, in-browser interface.

In addition to learning Python in a course setting, your journey to becoming a data scientist should also include soft skills. Plus, there are some complementary technical skills we recommend you learn along the way.


Step 1: Learn Python Fundamentals

Everyone starts somewhere. This first step is where you’ll learn Python programming basics. You’ll also want an introduction to data science.

One of the important tools you should start using early in your journey is Jupyter Notebook, which comes prepackaged with Python libraries to help you learn these two things.

Kickstart your learning by: Joining a community

By joining a community, you’ll put yourself around like-minded people and increase your opportunities for employment. According to the Society for Human Resource Management, employee referrals account for 30% of all hires.

Create a Kaggle account, join a local Meetup group, and participate in Dataquest’s learner community with current students and alums.

Related skills: Try the Command Line Interface

The Command Line Interface (CLI) lets you run scripts more quickly, allowing you to test programs faster and work with more data.


Step 2: Practice Mini Python Projects

We truly believe in hands-on learning. You may be surprised by how soon you’ll be ready to build small Python projects. We've already put together a great guide to Python projects for beginners, which includes ideas like:

  • Tracking and Analyzing Your Personal Amazon.com Spending Habits — A fun project that'll help you practice Python and pandas basics while also giving you some real insight into your personal finance.
  • Analyze Data from a Survey — Find public survey data or use survey data from your own work in this beginner project that'll teach you to drill down into answers to mine insights.
  • Try one of our Guided Projects — Interactive Python projects for every skill level that use real data and offer guidance while still challenging you to apply your skills in new ways.

But that's just the tip of the iceberg, really. You can try programming things like calculators for an online game, or a program that fetches the weather from Google in your city. You can also build simple games and apps to help you familiarize yourself with working with Python.

Building mini projects like these will help you learn Python. Programming projects like these are standard for all languages, and a great way to solidify your understanding of the basics.

You should start to build your experience with APIs and begin web scraping. Beyond helping you learn Python programming, web scraping will be useful for you in gathering data later.

Kickstart your learning by: Reading

Enhance your coursework and find answers to the Python programming challenges you encounter. Read guidebooks, blog posts, and even other people’s open source code to learn Python and data science best practices – and get new ideas.

Automate The Boring Stuff With Python by Al Sweigart is an excellent and entertaining resource. But we've put together an entire list of data science ebooks that are totally free for you to check out, too. Highlights include:

Related skills: Work with databases using SQL

SQL is used to talk to databases to alter, edit, and reorganize information. SQL is a staple in the data science community, and we've written a whole article about why you need to learn SQL if you want a job in data.
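You don't even need a database server to get a feel for this: Python ships with SQLite in its standard library. Here's a tiny self-contained taste (table and data invented for illustration):

```python
# A tiny illustration of "talking to a database" from Python's stdlib sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database
con.execute("CREATE TABLE users (name TEXT, age INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?)", [("Ada", 36), ("Grace", 45)])

# SQL selects and filters; Python receives the rows
rows = con.execute("SELECT name FROM users WHERE age > 40").fetchall()
print(rows)  # [('Grace',)]
```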


Step 3: Learn Python Data Science Libraries

Unlike some other programming languages, in Python, there is generally a best way of doing something. The three best and most important Python libraries for data science are NumPy, Pandas, and Matplotlib.

We've put together a helpful guide to the 15 most important Python libraries for data science, but here are a few that are really critical for any data work in Python:

  • NumPy — A library that makes a variety of mathematical and statistical operations easier; it is also the basis for many features of the pandas library.
  • pandas — A Python library created specifically to facilitate working with data, this is the bread and butter of a lot of Python data science work.
  • Matplotlib — A visualization library that makes it quick and easy to generate charts from your data.
  • scikit-learn — The most popular library for machine learning work in Python.

NumPy and Pandas are great for exploring and playing with data. Matplotlib is a data visualization library that makes graphs like you’d find in Excel or Google Sheets.
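To make that division of labor concrete, here's a small invented example using two of the three (the Matplotlib call is shown only as a comment, since plotting needs a display):

```python
import numpy as np
import pandas as pd

# A tiny invented dataset to show how the libraries fit together
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar"],
    "revenue": [120, 95, 140],
})
sales["growth"] = sales["revenue"].pct_change()  # pandas: tabular transforms
print(np.mean(sales["revenue"]))                 # NumPy: math on arrays

# Matplotlib would chart it, e.g.:
# sales.plot(x="month", y="revenue", kind="bar")
```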

Kickstart your learning by: Asking questions

You don’t know what you don’t know!

Python has a rich community of experts who are eager to help you learn Python. Resources like Quora, Stack Overflow, and Dataquest’s learner community are full of people excited to share their knowledge and help you learn Python programming. We also have an FAQ for each mission to help with questions you encounter throughout your programming courses with Dataquest.

Related skills: Use Git for version control

Git is a popular tool that helps you keep track of changes made to your code, which makes it much easier to correct mistakes, experiment, and collaborate with others.


Step 4: Build a Data Science Portfolio as you Learn Python

For aspiring data scientists, a portfolio is a must.

These projects should include work with several different datasets and should leave readers with interesting insights that you’ve gleaned. Some types of projects to consider:

  • Data Cleaning Project — Any project that involves dirty or "unstructured" data that you clean up and analyze will impress potential employers, since most real-world data is going to require cleaning.
  • Data Visualization Project — Making attractive, easy-to-read visualizations is both a programming and a design challenge, but if you can do it right, your analysis will be considerably more impactful. Having great-looking charts in a project will make your portfolio stand out.
  • Machine Learning Project — If you aspire to work as a data scientist, you definitely will need a project that shows off your ML chops (and you may want a few different machine learning projects, with each focused on your use of a different popular algorithm). 

Your analysis should be presented clearly and visually; ideally in a format like a Jupyter Notebook so that technical folks can read your code, but non-technical people can also follow along with your charts and written explanations.

Your portfolio doesn’t necessarily need a particular theme. Find datasets that interest you, then come up with a way to put them together. However, if you aspire to work at a particular company or industry, showcasing projects relevant to that industry in your portfolio is a good idea.

Displaying projects like these gives fellow data scientists an opportunity to potentially collaborate with you, and shows future employers that you’ve truly taken the time to learn Python and other important programming skills.

One of the nice things about data science is that your portfolio doubles as a resume while highlighting the skills you’ve learned, like Python programming.

Kickstart your learning by: Communicating, collaborating, and focusing on technical competence

During this time, you’ll want to make sure you’re cultivating those soft skills required to work with others, making sure you really understand the inner workings of the tools you’re using.

Related skills: Learn beginner and intermediate statistics

While learning Python for data science, you’ll also want to get a solid background in statistics. Understanding statistics will give you the mindset you need to focus on the right things, so you’ll find valuable insights (and real solutions) rather than just executing code.


Step 5: Apply Advanced Data Science Techniques

Finally, aim to sharpen your skills. Your data science journey will be full of constant learning, but there are advanced courses you can complete to ensure you’ve covered all the bases.

You’ll want to be comfortable with regression, classification, and k-means clustering models. You can also step into machine learning – bootstrapping models and creating neural networks using scikit-learn.
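If you're curious what that looks like in practice, here's a minimal k-means example with scikit-learn (the points are made up, and this assumes scikit-learn is installed):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious made-up groups of 2-D points
points = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

# The first two points end up in one cluster, the last two in the other
print(model.labels_)
```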

At this point, programming projects can include creating models using live data feeds. Machine learning models of this kind adjust their predictions over time.

Remember to: Keep learning!

Data science is an ever-growing field that spans numerous industries.

At the rate that demand is increasing, there are exponential opportunities to learn. Continue reading, collaborating, and conversing with others, and you’re sure to maintain interest and a competitive edge over time.

How Long Will It Take To Learn Python?

After reading these steps, the most common question we have people ask us is: “How long does all this take?”

There are a lot of estimates for how long it takes to learn Python. For data science specifically, estimates range from three months to a year of consistent practice.

We’ve watched people move through our courses at lightning speed and others who have taken it much slower.

Really, it all depends on your desired timeline, the free time you can dedicate to learning Python programming, and the pace at which you learn.

Dataquest’s courses are created for you to go at your own speed. Each path is full of missions, hands-on learning, and opportunities to ask questions so that you can gain in-depth mastery of data science fundamentals.

Get started for free. Learn Python with our Data Scientist path and start mastering a new skill today!

Where Can I Learn Python for Data Science?

There are tons of Python learning resources out there, but if you're looking to learn it for data science, it's best to choose somewhere that teaches about data science specifically. 

This is because Python is also used in a variety of other programming disciplines from game development to mobile apps. Generic "learn Python" resources try to teach a bit of everything, but this means you'll be learning quite a few things that aren't actually relevant to data science work.

Moreover, working on something that doesn't feel connected to your goals can feel really demotivating. If you want to be doing data analysis and instead you're struggling through a course that's teaching you to build a game with Python, it's going to be easy to get frustrated and quit.

There are lots of free Python for data science tutorials out there. If you don't want to pay to learn Python, these can be a good option — and the link in the previous sentence includes dozens, separated out by difficulty level and focus area.

If you're serious about it, though, it may be best to find a platform that'll teach you interactively, with a curriculum that's been constructed to guide you through your data science learning journey. Dataquest is one such platform, and we have course sequences that can take you from beginner to job qualified as a data analyst or data scientist in Python.

Is Python Necessary in the Data Science Field?

It's possible to work as a data scientist using either Python or R. Each language has its strengths and weaknesses, and both are widely-used in the industry. Python is more popular overall, but R dominates in some industries (particularly in academia and research).

To do data science work, you'll definitely need to learn at least one of these two languages. It doesn't have to be Python, but it does have to be one of either Python or R. 

(Of course, you'll also have to learn some SQL no matter which of Python or R you pick to be your primary programming language).

Is Python Better than R for Data Science?

This is a constant topic of discussion in data science, but the true answer is that it depends on what you're looking for, and what you like.

R was built with statistics and mathematics in mind, and there are amazing packages that make it easy to use for data science. It also has a very supportive online community.

Python is a much better language for all-around work, meaning that your Python skills would be more transferable to other disciplines. It's also slightly more popular, and some would argue that it's the easier of the two to learn (although plenty of R folks would disagree).

Rather than reading opinions, check out this more objective article about how Python and R handle similar data science tasks, and see which one looks more approachable to you.

How is Python Used for Data Science?

Programming languages like Python are used at every step in the data science process. For example, a data science project workflow might look something like this:

  1. Using Python and SQL, you write a query to pull the data you need from your company database.
  2. Using Python and the pandas library, you clean and sort the data into a dataframe (table) that's ready for analysis.
  3. Using Python and the pandas and matplotlib libraries, you begin analyzing, exploring, and visualizing the data.
  4. After learning more about the data through your exploration, you use Python and the scikit-learn library to build a predictive model that forecasts future outcomes for your company based on the data you pulled.
  5. You arrange your final analysis and your model results into an appropriate format for communicating with your coworkers.

Python is used at almost every step along the way!
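As a toy illustration of steps 2 and 3 (the data here is invented for the example):

```python
import pandas as pd

# Pretend this came back from the database query in step 1
raw = pd.DataFrame({
    "customer": ["a", "b", "b", None],
    "spend": ["10", "20", "20", "5"],
})

# Step 2: clean (drop incomplete rows, deduplicate, fix types)
clean = (raw.dropna(subset=["customer"])
            .drop_duplicates()
            .assign(spend=lambda d: d["spend"].astype(int)))

# Step 3: explore/summarize
print(clean["spend"].sum())  # 30
```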

Charlie is a student of data science, and also a content marketer at Dataquest. In his free time, he’s learning to mountain bike and making videos about it.

The post How to Learn Python for Data Science In 5 Steps appeared first on Dataquest.

Categories: FLOSS Project Planets

Codementor: Is Java and Python similar?

Planet Python - Tue, 2020-08-11 10:55
In this article, you will learn whether Java and Python are similar.
Categories: FLOSS Project Planets

SeExpr status update!

Planet KDE - Tue, 2020-08-11 10:10

Hey all!

It’s been quite a while since my last post. Exams for my teaching certification have not gone as expected – had to pull out after being flattened in quite a critical one…

Buuuut! I am glad to announce that the SeExpr documentation is now available in the Krita manual!

The SeExpr tutorial, available now.

Along with this tutorial, you will find:

And, the most important, a great set of examples is now packed along with every Krita build for you to test and play with.

I am planning additional code-wise goals, but none fits within the two weeks remaining in the program… so I’ll use that time for something project-related. I’m glad to announce that I’ll be taking part in this year’s Akademy – not only as part of the Student Showcase, but with a talk of my own!

The talk is entitled “Integrating Hollywood Open Source with KDE Applications”, and in it I’ll tell you about the nitty-gritty bits of this journey. More formally,

In this presentation, I will guide the audience through the pitfalls and challenges of integrating this library with the Krita codebase. Developers in the audience will be interested in learning how SeExpr’s build system, platform support, and dependencies were harmonized with Krita’s. The creation of its layer generator, and the formal specification of its storage requirements, will also be addressed. Artists will be interested in how SeExpr’s UI and UX were adapted to suit their current workflow, as well as KDE’s accessibility and internationalization requirements.

The talk is scheduled for September 5 at 19:30 GMT.

Thank you all for supporting me, and see you at Akademy!

Cheers,

~amyspark

Categories: FLOSS Project Planets

Real Python: Identify Invalid Python Syntax

Planet Python - Tue, 2020-08-11 10:00

Python is known for its simple syntax. However, when you’re learning Python for the first time or when you’ve come to Python with a solid background in another programming language, you may run into some things that Python doesn’t allow. If you’ve ever received a SyntaxError when trying to run your Python code, then this guide can help you. Throughout this course, you’ll see common examples of invalid syntax in Python and learn how to resolve the issue.

By the end of this course, you’ll be able to:

  • Identify invalid syntax in Python
  • Make sense of SyntaxError tracebacks
  • Resolve invalid syntax or prevent it altogether
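A self-contained way to see the kind of error the course discusses is to compile a snippet with a missing comma and inspect what Python reports. Note that the reported line often points near, not exactly at, the real mistake, which is a big part of making sense of SyntaxError tracebacks. The file name here is just a placeholder.

```python
# A dict literal with a missing comma after 'jim': 27.
source = """ages = {
    'michael': 43,
    'jim': 27
    'pam': 26,
}
"""

try:
    compile(source, "office.py", "exec")
except SyntaxError as err:
    # The line Python blames is usually at or just after the real problem,
    # so checking the line above the reported one is a good first instinct.
    print(f"line {err.lineno}: {err.msg}")
```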


Categories: FLOSS Project Planets

PSF GSoC students blogs: Week 10

Planet Python - Tue, 2020-08-11 09:42

What did you do this week?

I finally fixed all the bugs that were there. The KML Overlay is totally functional and useful now! I also added extra features, which means that any kind of KML file can be parsed and displayed without crashing.
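For readers curious what parsing a KML file involves, here is a minimal sketch using only the standard library. This is illustrative of the kind of work an overlay does, not the project's actual code, and the sample placemark data is made up.

```python
import xml.etree.ElementTree as ET

# KML is XML in the OGC namespace, so every lookup must be namespaced.
KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

kml = """<?xml version="1.0"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Test pin</name>
      <Point><coordinates>-122.08,37.42,0</coordinates></Point>
    </Placemark>
  </Document>
</kml>"""

root = ET.fromstring(kml)
for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
    name = pm.findtext("kml:name", namespaces=KML_NS)
    coords = pm.findtext(".//kml:coordinates", namespaces=KML_NS).strip()
    # KML stores lon,lat,alt in that order
    lon, lat, _ = (float(v) for v in coords.split(","))
    print(name, lat, lon)  # prints: Test pin 37.42 -122.08
```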

What will I do next week?

I'll finally start working on writing tests. The mentors have asked for some changes in the features, so I'll work on that too. Other than basic design customization, the KML Overlay is quite ready :))

Did I get stuck anywhere?

No, this week went by quite smoothly, and I got a lot of stuff ticked on my To-Do list :))

Categories: FLOSS Project Planets

Ned Batchelder: You should include your tests in coverage

Planet Python - Tue, 2020-08-11 08:17

This seems to be a recurring debate: should you measure the coverage of your tests? In my opinion, definitely yes.

Just to clarify: I’m not talking about using coverage measurement with your test suite to see what parts of your product are covered. I’ll assume we’re all doing that. The question here is, do you measure how much of your tests themselves are executed? You should.

The reasons all boil down to one idea: tests are real code. Coverage measurement can tell you useful things about that code:

  • You might have tests you aren’t running. It’s easy to copy and paste a test to create a new test, but forget to change the name. Since test names are arbitrary and never used except in the definition, this is a very easy mistake to make. Coverage can tell you where those mistakes are.
  • In any large enough project, the tests directory has code that is not a test itself, but is a helper for the tests. This code can become obsolete, or can have mistakes. Helpers might have logic meant for a test to use, but somehow is not being used. Coverage can point you to these problems.
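The first bullet above, the copy-pasted test with a forgotten rename, fits in a few lines of Python. This toy module is mine, not from the article, but it shows exactly the mistake coverage measurement would flag:

```python
# A hazard that measuring coverage of your tests catches: a test copied
# and pasted to create a new one, with the name left unchanged. The
# second definition silently shadows the first, so the first body never
# runs -- coverage flags its lines as unexecuted, while the test runner
# happily reports everything green.

def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5          # shadowed: this body never runs

def test_add():                    # pasted copy, rename forgotten
    assert add(-2, -3) == -5

# The module namespace holds only one test function now; a runner
# collecting names finds a single test where the author expects two.
collected = [n for n in globals() if n.startswith("test_")]
print(collected)   # prints: ['test_add']
```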

Let’s flip the question around: why not measure coverage for your tests? What’s the harm?

  • “It skews my results”: This is the main complaint. A project has a goal for coverage measurement: coverage has to be above 80%, or some other number. Measuring the tests feels like cheating, because for the most part, tests are straight-line code executed by the test runner, so it will all be close to 100%.

    Simple: change your goal. 80% was just a number your team picked out of the air anyway. If your tests are 100% covered, and you include them, your total will go up. So use (say) 90% as a goal. There is no magic number that is the “right” level of coverage.

  • “It clutters the output”: Coverage.py has a --skip-covered option that will leave all the 100% files out of the report, so that you can focus on the files that need work.
  • “I don’t intend to run all the tests”: Some people run only their unit tests in CI, saving integration or system tests for another time. This will require some care, but you can configure coverage.py to measure only the part of the test suite you mean to run.

Whenever I discuss this idea with people, I usually get one of two responses:

  • “There are people who don’t measure their tests!?”
  • “Interesting, I had a problem this could have found for me.”

If you haven’t been measuring your tests, give it a try. I bet you will learn something interesting. There’s no downside to measuring the coverage of your tests, only benefits. Do it.

Categories: FLOSS Project Planets

PSF GSoC students blogs: GSoC: Week 11: InputEngine.add(paths)

Planet Python - Tue, 2020-08-11 07:59

Hello guys, 

What did I do this week?

After we added support for file paths in the output, I found a bug that was breaking cve_scanner whenever the --input-file flag was used for scanning CVEs from a CSV or JSON file. I also found several other issues in the previous structures, which are specified below: 

  1. The old CVEData was a NamedTuple, and since the newly added path attribute is mutable, it could create hard-to-find bugs. 
  2. To update paths, we needed to scan all_cve_data to find the product for which we want to append paths.
    Time complexity: O(n**2), which can be reduced to O(n) using a better structure.
  3. Passing vendor, product, and version through different functions was decreasing readability, so ProductInfo is a nice way to pack this data together, since we never need those fields alone.
  4. The TriageData structure wasn't in sync with the old CVEData, so csv2cve or input_engine was breaking.

So, I decided to change the current structure to handle all these issues. Previously all_cve_data was a Set[CVEData], which was sufficient at the time because all attributes of CVEData were immutable and we were just using the set to remove duplicates from the output. But when we introduce a paths attribute, we need to update paths every time we detect the same product at a different time, and a set doesn't offer an easy way to get a value stored in it (sets aren't made for storing mutable types) apart from looping over the whole set to find what we are looking for. So, I refactored the structure into two parts: 1) an immutable ProductInfo(vendor, product, version) and 2) a mutable CVEData(list_of_cves, paths_of_cves). I am storing a mapping from ProductInfo to CVEData in all_cve_data, so now we can access the CVEData of a product without having to traverse the whole of all_cve_data. I have also moved all data structures into utils to avoid circular imports, and I added a test for paths.
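A sketch of that refactoring: an immutable, hashable ProductInfo key mapping to a mutable CVEData value, so paths can be updated in O(1) per product instead of scanning a set. Field names follow the post, but this is illustrative, not the project's exact code, and the sample product data is invented.

```python
from dataclasses import dataclass, field

from typing import NamedTuple


class ProductInfo(NamedTuple):
    # Immutable and hashable, so it can serve as a dict key
    vendor: str
    product: str
    version: str


@dataclass
class CVEData:
    # Mutable value: the CVE list and the paths can grow over time
    cves: list = field(default_factory=list)
    paths: set = field(default_factory=set)


all_cve_data: dict[ProductInfo, CVEData] = {}


def add_path(info: ProductInfo, path: str) -> None:
    # O(1) lookup instead of traversing the whole collection
    all_cve_data.setdefault(info, CVEData()).paths.add(path)


info = ProductInfo("haxx", "curl", "7.59.0")
add_path(info, "/usr/bin/curl")
add_path(info, "/opt/curl/bin/curl")
print(sorted(all_cve_data[info].paths))
# prints: ['/opt/curl/bin/curl', '/usr/bin/curl']
```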

What am I doing this week? 

I am continue to improve documentation of the code I generated like adding docstrings and comments. And I am also going to add requested how-to guides to improve User Experience. 

Have I got stuck anywhere?

No, I didn't get stuck this week.

Categories: FLOSS Project Planets

Specbee: Improving Drupal 9 Performance with modules, best coding practices and the right server configuration

Planet Drupal - Tue, 2020-08-11 07:34
Improving Drupal 9 Performance with modules, best coding practices and the right server configuration Pradosh 11 Aug, 2020

You could have the most powerful server with memory in heaps but is that enough to ensure a high-performing website? With Drupal, scaling the website in harmony with your business growth is easy. In fact, that is what Drupal is great at. However, a sudden rise in web pages, functionality and content could impact its performance. Drupal 9 is here now and is all geared-up to take on this challenge like a pro! It comes with the goodness of Drupal 8 minus the old code which makes it leaner, cleaner and more powerful. Explore more on Drupal 9 performance improvement techniques that absolutely work.

Website performance is key to business success. A website with better performance helps SEO, improves the visitor conversion rate, and provides a better user experience, all of which help the business grow. Slow-loading websites do quite the opposite and can become a reason for business failure.

 

There are many things that affect the website performance. Some of them are:

•    Your service provider (Hosting, DNS etc.)
•    Number of requests to the server
•    Technical issues or bad programming practices
•    Caching technique
•    Improper server configuration
•    Heavy image and video files

Drupal 9 Core and Custom Modules to boost Performance 

There are many available contributed and core modules in Drupal 9 which can be helpful in improving your website’s performance. By following certain coding practices and with proper server configuration, you can drastically improve the site performance.

Core Modules
  •    Big Pipe

The Drupal Big Pipe module makes things faster without extra configuration. It comes packaged with Drupal core. It improves frontend perceived performance by using cacheability metadata and thus improving the rendering pipeline.

•    Internal Dynamic Page Cache

This Drupal 9 module helps to cache dynamic content. It is helpful for both anonymous and authenticated users. This module is not available in Drupal 7. Pages requested by users are stored the first time they are requested and can then be reused when the same page is requested again.


•    Internal Page Cache

The Internal Page Cache module helps to cache data for anonymous users. This module is available in core and is enabled by default. 

Configuration Path:  admin/config/development/performance

Here you can clear cache, set browser and proxy cache maximum age and enable / disable aggregation settings.

                     Internal Page Cache Module

Contributed Modules
  • Advanced CSS/JS Aggregation

    The Advagg module comes packed with many other submodules, such as:
  • AdvAgg CDN: Helps to load assets (CSS/JS) from a public CDN
  • AdvAgg CSS/JS Validator: Validates CSS and JS files
  • AdvAgg External Minifier: Minifies JavaScript and/or CSS with a command-line minifier
  • AdvAgg Minify CSS: Helps minify CSS files with a 3rd-party minifier
  • AdvAgg Minify JS: Helps minify JS files with a 3rd-party minifier
  • AdvAgg Modifier: Allows one to alter the CSS and JS array (may have compatibility issues)
  • AdvAgg Old Internet Explorer Compatibility Enhancer

Configuration Path: /admin/config/development/performance/advagg

This module also supports file compression techniques like gzip and Brotli, and it helps reduce the number of HTTP requests, thus improving site performance significantly.

  • Blazy

The Drupal 9 Blazy module provides lazy loading of images to save bandwidth and avoid higher bounce rates. Lazy loading is a technique that loads images only when they are in the area visible to the user. This technique saves time and data. 

Configuration path: /admin/config/media/blazy

Here you can enable/disable Blazy, configure placeholder effect and can also set the offset which determines how early the image will be visible to the user.


                              Blazy module
  • CDN

The Drupal 9 CDN module helps in easy integration of CDN in Drupal websites. It helps to serve static content from the CDN server to increase the speed of content delivery. Other than that, this module is also easy to configure.

Configuration path: /admin/config/services/cdn

           CDN Module Settings

Here you can enable/disable the CDN, provide mapping URL and check/uncheck forever file caching.

Performance Improvement with Best Coding Practices
  • Using isset() over array_key_exist()

The isset() method is significantly faster than array_key_exists(). The main difference between the two is that array_key_exists() will definitely tell you whether a key exists in an array, whereas isset() only returns true if the key/variable exists and is not null. For more information on this, check here for a benchmark comparison.

  • Using entityQuery()

entityQuery() depends on a storage controller to handle building and executing the query for the appropriate entity storage. This has the advantage that any query run through entityQuery() is storage independent. So, if you’re writing a contributed module or working on a website where it might be necessary to move to an alternative entity storage in the future, all your queries will transparently use the new storage backend without any changes needed. entityQuery() can be used whether you’re writing queries by hand in custom code or via the entityQuery() Views backend.

  • Using loadMultiple() method instead of looping

If you have 10 nids (node ids) and you loop through them to load each node, you are making 10 queries to the database. With loadMultiple(), this is reduced to just one database query. 
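The saving is easy to demonstrate with a query counter. This is a Python stand-in for the pattern, not Drupal's actual API; the point is the 10-to-1 drop in round trips:

```python
# Compare per-id loading with batched loading by counting "queries".
QUERY_COUNT = 0
DB = {nid: f"node {nid}" for nid in range(1, 11)}  # 10 fake nodes


def query(nids):
    """One call to this function stands in for one database query."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return {nid: DB[nid] for nid in nids}


# Loading in a loop: one query per node id.
QUERY_COUNT = 0
nodes = {}
for nid in DB:
    nodes.update(query([nid]))
print(QUERY_COUNT)   # prints: 10

# loadMultiple-style: a single query for all ids.
QUERY_COUNT = 0
nodes = query(list(DB))
print(QUERY_COUNT)   # prints: 1
```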

 

  • Caching

Using Cache API in Drupal 9 you can cache the renderer, response array or object. There are three renderability caching metadata available in Drupal 9.

  1. Cache tags

    The Cache tags are used to cache data when it depends upon Drupal entities or configurations. Syntax for this is cache-item:identifier e.g. node:5, user:3.
  2. Cache context

    Syntax:
    •    periods separate parents from children
    •    a plurally named cache context indicates a parameter may be specified; to use: append a colon
                    Example: user.roles, user.roles:anonymous, etc.
  3. Cache max-age        

          Cache max-age is used to cache time sensitive data.

  • Queue worker / Batch

To process large amounts of data without PHP timing out, batch processing or a queue worker can be used. Items in a queue worker run only when cron runs, and only for a short amount of time. There are two types of queue workers: reliable and unreliable. A reliable queue worker ensures that each item in the queue runs at least once, whereas an unreliable queue may skip items due to memory failure or other interruptions. Batch processing processes all the items in the batch until they are finished, without waiting for a cron run, provided no error occurs during processing.

Improving Performance with better Server Configuration
  • Using Nginx instead of Apache

Nginx and Apache are both widely used web servers. Nginx has an edge over Apache on performance benchmarks; it is faster and more efficient. Nginx performs 2.5 times faster than Apache according to a benchmark test running up to 1,000 simultaneous connections.

  • HTTP/2.0 over HTTP/1.1

HTTP/2.0 supports multiplexing: unlike HTTP/1.1, where one slow resource can block the others, HTTP/2.0 sends multiple streams of data at once over a single TCP connection. HTTP/2.0 also uses more advanced header compression techniques than HTTP/1.1.

Nginx configuration for HTTP/2.0:

server {
    listen 443 ssl http2;    # http2 settings
    ssl_certificate server.crt;
    ssl_certificate_key server.key;
}
  • Serving Compressed Content
Compressing responses often significantly reduces the size of transmitted data. However, since compression happens at runtime, it can also add considerable processing overhead which can negatively affect performance. Nginx configuration to serve compressed content:

server {
    gzip on;
    gzip_static on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_proxied any;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    ...
}
  • MariaDB instead of MySQL

MariaDB has improved speed compared to MySQL. It provides faster caching and indexing than MySQL, and is almost 24% faster than MySQL in this case. There are other key metrics where MariaDB also beats MySQL, so MariaDB is preferred over MySQL in terms of performance.

  • CDN

CDN stands for content delivery network. It is a cluster of servers spread across the globe (known as points of presence, or PoPs) that work together to deliver content faster. A CDN stores a cached version of the site's content and delivers it from the nearest available server. Some popular CDN providers are Cloudflare, Amazon CloudFront, and Google Cloud CDN.
 

Drupal 9 is persistent on continuous innovation and carries forward significant features from Drupal 8. The codebase is now cleaner and more lightweight. Its compatibility with the latest modern web technologies and libraries has enabled organizations to build better, more powerful digital experiences. However, applying and leveraging the best techniques, modules, and coding practices can help maximize your performance-improvement efforts on a Drupal website. At Specbee, we are committed to providing our customers with high-performing websites. Contact us today to know how we can help you with your Drupal project.

Drupal Planet Drupal Development Drupal Tutorial Drupal Module

Categories: FLOSS Project Planets

Week 9 and 10 : GSoC Project Report

Planet KDE - Tue, 2020-08-11 06:10

Last two weeks I worked on implementing saving and loading of storyboard items and fixed some bugs. For implementing saving and loading I created a copy of the data from the models in KisDocument. That data is kept in sync with the data in models.

Saving and loading of storyboard items are working now. You can save a krita document with storyboards in it and the storyboard data will be saved. Thumbnails are not saved into the .kra file but are loaded using the frame number when the document is loaded. Other than that all data related to the storyboard such as scene name, comments, duration are saved. Since the data is in KisDocument we will have storyboards for each of the .kra files.

I worked on the Export dialog GUI and implemented some of its functions. The user can choose the range of items to render. The layout of the exported document can be decided either through the custom options, i.e. Rows, Columns and Page Size, or it can be specified using an SVG file. On clicking the “Specify layout using SVG file” button, a file dialog opens to choose the layout file. If an SVG file is selected, the custom layout options are disabled and cannot be changed, since they are no longer used. On clicking the Export button, the user gets to choose the file name and location of the exported file.

Other than that I fixed some bugs and changed tests to match new changes made. Also I wrote code documentation for most of the parts implemented till now.

This week I will work on creating a layout for exporting, specifying layout using SVG file and then maybe work on the actual exporting part as well. Other than that I plan to improve on the user documentation based on feedback received.

Categories: FLOSS Project Planets

Andre Roberge: Rich + Friendly-traceback: first look

Planet Python - Tue, 2020-08-11 03:58

 After a couple of hours of work, I have been able to use Rich to add colour to Friendly-traceback. Rich is a fantastic project, which has already gotten a fair bit of attention and deserves even more.

The following is just a preview of things to come; it is just a quick proof of concept.


Friendly-traceback has 10 so-called verbosity settings, one of which is simply to show the normal Python traceback.

There is, of course, much, much more to come ...

Update: more work in progress





Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check-In #6 (2nd Aug - 9th Aug)

Planet Python - Tue, 2020-08-11 03:34

So we have almost reached the end of the program and it was a fun learning experience.

What did you do this week ?
We were looking to restructure some core chunks of the code to allow easier testing and future contribution. Some progress was made in this direction, identifying possible problems. Apart from that, basic support for ordinal numbers (for the English language only) was added, which will make the library more useful when it is integrated with date-parser.
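To give a flavour of what English ordinal support involves, here is a minimal sketch: map irregular ordinals directly, and strip the regular suffixes back to a cardinal. The names, tables, and coverage here are illustrative, not number-parser's actual API.

```python
# Tiny, incomplete lookup tables just for demonstration.
CARDINALS = {"four": 4, "six": 6, "seven": 7, "ten": 10,
             "twenty": 20, "sixty": 60}
ORDINALS = {"first": 1, "second": 2, "third": 3, "fifth": 5,
            "eighth": 8, "ninth": 9, "twelfth": 12}


def ordinal_to_cardinal(word: str) -> int:
    word = word.lower()
    if word in ORDINALS:                 # irregular forms
        return ORDINALS[word]
    if word.endswith("ieth"):            # "sixtieth" -> "sixty"
        return CARDINALS[word[:-4] + "y"]
    if word.endswith("th"):              # "fourth" -> "four"
        return CARDINALS[word[:-2]]
    raise ValueError(f"not a recognised ordinal: {word}")


print(ordinal_to_cardinal("third"),
      ordinal_to_cardinal("fourth"),
      ordinal_to_cardinal("sixtieth"))  # prints: 3 4 60
```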

Did you get stuck anywhere ?
The restructuring part was quite tough because, with the current logic, it's hard to restructure the code into the necessary logical flow without completely revamping everything. Hence, for the time being, we will let it be in its current form.

What is coming up next ?
The plan for the next week is to incorporate number-parser with date-parser. To achieve this we need to have auto-language detection in the number-parser code. Currently you need to supply a mandatory language parameter which we will do away with.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check-In #11

Planet Python - Tue, 2020-08-11 03:06
What I did this week?

This week I added checkers for the tcpdump and Qt libraries. 

What will I be doing this week?

I will be working on updating documentation in the next week.

Did I get stuck anywhere?

While adding the checker for the Qt library, I missed a vendor-product pair, which was rightly pointed out by Terri. Other than this, everything else worked out.

Categories: FLOSS Project Planets

Drupal.org blog: What's new on Drupal.org? - Special DrupalCon Edition 2020

Planet Drupal - Mon, 2020-08-10 20:40

Read our roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community. You can also review the Drupal project roadmap.

It's hard to believe that our last published update was back in April of 2020. Time seems to simultaneously crawl and disappear in the time of quarantine. All hands were on deck at the Drupal Association to facilitate the transition of DrupalCon from an in-person event to a virtual conference, and those efforts were rewarded:

The first DrupalCon Global event exceeded all expectations. Creating a 100% virtual event which captures the spirit of DrupalCon is no small task, but thanks to our sponsors and supporting partners we were able to set the benchmark for virtual open source conferences.

DrupalCon wasn't the only thing we worked on for the past three months, of course, and we have a number of phenomenal updates that we'd love to share with you.

In fact, we spoke about this work during the Drupal.org Engineering Panel at DrupalCon Global, so what better way to update you than with that recording!

Highlights Looking for the TLDR? 

Don't worry - we know you're busy - here's the short version: 

  • We recapped the success of the #DrupalCares campaign, and reiterated our #DrupalThanks to all the contributors who made it possible to refocus on our mission work.
  • We shared new metrics available on https://www.drupal.org/metrics that help us understand community activity, project health, and community diversity.
  • We showed off the new community event listings - the beginning of a more modern replacement for the event-organizing features of groups.drupal.org.
  • We showed off the tools built for the new contributor guide - which are now in use by the community to help match contributors and their skills to what's needed for the Drupal project.
  • Drupal 9 was released, not only on time, but in its very first potential release window - on June 3rd, 2020. This is an incredible testament to the work of the community, and in particular the core maintainers.
  • We launched the beta for merge requests integrated into Drupal.org - you can opt in the projects you maintain.
  • We announced that the Drupal Steward program was going live! And we're looking for 10-30 initial site owners ready to sign up, to help us refine our onboarding process.
But that's not all!  Community contributed Drupal.org improvements

In addition to all the work that the Engineering team has done in recent months, the community has also swung into action to provide some great improvements to Drupal.org. 

Categories: FLOSS Project Planets

spikelantern: The best frontend JavaScript framework for Django

Planet Python - Mon, 2020-08-10 20:00

A question I've seen asked a lot is "what's the best frontend JavaScript framework to use with Django".

Django itself doesn't make any recommendation on which frontend framework to use, or even assumes you're using a frontend framework at all.

So, which frontend framework should you be using? And which one "plays well" with Django?

Defining "the best"

The problem with such questions is that "the best" is often ill-defined. Best based on what criteria?

If you're starting a new project and wondering which one to choose, the disappointing answer I have for you is this:

The best frontend JavaScript framework is one you already know well.

That's it. If you, or your team, wants to use a frontend framework, the best one to use is the one that you and your team have the most familiarity with. That's because it's very likely that you will be more productive with it.

Django "plays well" with any frontend JavaScript framework, by virtue of the fact that it makes no assumptions and does not force you to use any framework. In that respect, it's all the same. At the end it's all JavaScript.

The actual question

But of course, that's not what you're asking.

If you already know a frontend JavaScript framework well, then chances are you wouldn't be asking this question.

It's likely you're asking this because you aren't familiar with any of them, and are wondering which one to learn.

There are a few factors to consider.

Choosing a frontend framework to learn

Ultimately, whichever framework you choose to learn, you want to actually succeed at learning it.

That means, to set yourself up for success you want one that has these properties:

  • Many good resources, especially for beginners
  • A large, welcoming community
  • One that is fairly mature, and isn't likely to change drastically within the next 6 months

Luckily for you, most mainstream frameworks today have all of these properties. React, Vue, and Angular all have very good resources, have large community support, and have APIs that are fairly mature. React, Vue, and Angular are all good choices to learn.

You can eyeball some of the tutorials for a framework to see which one works best for you. Do most of the tutorials contain a lot of esoteric jargon that you have trouble understanding? That's a sign that the community has yet to mature to an extent where it's welcoming to beginners, or the framework just has other goals that are incompatible with yours. That's alright, you can choose another framework and/or community that is more suited to you.

Also, I'd recommend avoiding very new frameworks. They might be good frameworks, but new frameworks tend to change a lot in their first couple of years. That means whatever you learn today might change drastically in a year's time.

Another thing to consider is how big the community is. If it's a mainstream framework like React, then it will have very large community support and is often backed by companies, and therefore is not likely to go away. Whereas if you're looking at a framework maintained by a single developer, there's a possibility that that single developer might just stop maintaining it. Don't roll the dice.

Other goals

You probably aren't learning this "for fun", but you may have other goals such as "getting a job".

Anecdotally, most employers, at least in my city, just expect familiarity in any JavaScript framework, rather than one in particular. What's usually more important is solid JavaScript skills. So, I wouldn't worry too much about this.

However, it's also true that you can be strategic about this by learning a framework that most employers use, so that you don't need to switch frameworks.

Check the job boards in your city, and look at what employers are looking for. You could also look at other cities.

In Sydney, where I currently live, React seems to be the most common framework used by companies, followed by Angular. There are also a few jobs for Vue developers, especially where the backend is Laravel (which endorses Vue as the framework of choice). This might be different in your city.

Still don't know what to choose?

If you're still not sure, I'd recommend just choosing React.

At the time of writing, React has arguably the biggest community among all frameworks, has tons of good resources, and is fairly mature. Furthermore, it's very popular among both startups and larger organisations. It's a good and safe option.

And since it's so popular, the likelihood of you ending up in a job where you need to deal with React is very high. That means it makes sense to learn it anyway, even if you end up choosing another framework as your "main".

So, if you're in doubt, just choose React. It is unlikely that this will turn out to be a bad choice.

Categories: FLOSS Project Planets

Pages