Feeds

PSF GSoC students blogs: Weekly Check-in #11

Planet Python - Mon, 2020-08-10 15:49


What did I do this week?

Added test with multiple worker nodes. I started working on the input network.

What's next?

I'll be continuing to work on distributed orchestrator, specifically adding input network to orchestrator node.

Did I get stuck somewhere?

No.

Categories: FLOSS Project Planets

Erik Marsja: How to Perform a Two-Sample T-test with Python: 3 Different Methods

Planet Python - Mon, 2020-08-10 14:28

The post How to Perform a Two-Sample T-test with Python: 3 Different Methods appeared first on Erik Marsja.

In this Python tutorial, you will learn how to perform a two-sample t-test with Python. First, you will learn about the t-test including the assumptions of the statistical test. Following this, you will learn how to check whether your data follow the assumptions. 

After this, you will learn how to perform a two-sample t-test using the following Python packages:

  • Scipy (scipy.stats.ttest_ind)
  • Pingouin (pingouin.ttest)
  • Statsmodels (statsmodels.stats.weightstats.ttest_ind)
  • Interpret and report the two-sample t-test
    • Including effect sizes

Finally, you will also learn how to interpret the results and, then, how to report the results (including data visualization). 

Prerequisites

Obviously, before learning how to calculate an independent t-test in Python, you will need at least one of the packages installed. Make sure that you have the following Python packages installed:

  • Scipy
  • Pandas
  • Seaborn
  • Pingouin (if using pingouin.ttest)
  • Statsmodels (if using statsmodels.stats.weightstats.ttest_ind)

Scipy

Scipy is an essential package for data analysis in Python and is, in fact, a dependency of all the other packages used in this tutorial. In this post, we will use it to test one of the assumptions using the Shapiro-Wilk test. Thus, you will need Scipy even if you use one of the other packages to calculate the t-test. Now, you might wonder why you should bother using any of the other packages for your analysis. Well, the ttest_ind function will only return the t- and p-value, whereas some of the other packages will return more values as well (e.g., the degrees of freedom, confidence interval, and effect sizes).

Pandas

Pandas will be used to import data into a dataframe and to calculate summary statistics. Thus, you will need this package to follow this tutorial.

Seaborn

If you want to visualize the different means and learn how to plot the p-values and effect sizes, Seaborn is a very easy-to-use data visualization package.

Pingouin

This is the second package used, in this tutorial, to calculate the t-test. One neat thing with the ttest function, of the Pingouin package, is that it returns a lot of information we need when reporting the results from the statistical analysis. For example, using Pingouin we also get the degrees of freedom, Bayes Factor, power, effect size (Cohen’s d), and confidence interval.

Statsmodels

Statsmodels is the third, and last, package used to carry out the independent samples t-test. You do not have to use it, and thus this package is not required for the post. It does, however, contrary to Scipy, also return the degrees of freedom in addition to the t- and p-values.

Installing the Needed Python Packages

Now, if you don’t have the required packages they can be installed using either pip or conda (if you are using Anaconda). Here’s how to install Python packages with pip:

pip install scipy numpy seaborn pandas statsmodels pingouin

If pip is telling you that there is a newer version, you can learn how to upgrade pip.

Using Conda to Install All Packages

If you are using Anaconda here’s how to create a virtual environment and install the needed packages:

conda create -n 2sampttest
conda activate 2sampttest
conda install scipy numpy pandas seaborn statsmodels pingouin

Obviously, you don’t have to install all the prerequisites of this post and you can refer to the post about installing Python packages if you need more information about the installation process.

Two Sample T-test

The two-sample t-test is also known as the independent samples, independent, or unpaired t-test. This type of statistical test compares two averages (means) and tells you whether these two means are statistically different from each other. In other words, it lets you know whether the difference between them could have happened by chance.

Example: clinical psychologists may want to test a treatment for depression to find out if the treatment will change the quality of life. In an experiment, a control group (e.g., a group who are given a placebo, or “sugar pill”, or in this case no treatment) is always used. The control group may report that their average quality of life is 3, while the group getting the new treatment might report a quality of life of 5. It would seem that the new treatment might work. However, it could be due to a fluke. In order to test this, the clinical researchers can use the two-sample t-test.

Hypotheses

Now, when performing t-tests you typically have the following two hypotheses:

  1. Null hypothesis: the two group means are equal
  2. Alternative hypothesis: the two group means are different (two-tailed)

Now, sometimes we may also have a specific idea about the direction of the effect. That is, we may, based on theory, assume that the condition one group is exposed to will lead to better (or worse) performance. In these cases, the alternative hypothesis will be something like: the mean of one group is greater (or less) than that of the other group (one-tailed).
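As a minimal sketch of the one-tailed case (the sample values below are made up for illustration): scipy’s ttest_ind returns a two-tailed p-value, which can be halved when the observed difference is in the hypothesized direction.

```python
from scipy import stats

# Hypothetical samples; we hypothesize (one-tailed) that group_a's mean is greater
group_a = [179, 181, 176, 184, 178]
group_b = [170, 172, 168, 175, 169]

t_stat, p_two_tailed = stats.ttest_ind(group_a, group_b)

# ttest_ind returns a two-tailed p-value; when the observed difference is
# in the hypothesized direction (t_stat > 0 here), halving it gives the
# one-tailed p-value
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2
```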

Assumptions

Now, besides the dependent variable being continuous and measured on an interval/ratio scale, there are three assumptions that need to be met.

  • Assumption 1: Are the two samples independent?
  • Assumption 2: Are the data from each of the 2 groups following a normal distribution?
  • Assumption 3: Do the two samples have the same variances (homogeneity of variance)?

Example Data

First, before going on to the two-sample t-test examples, we need some data to work with. In this blog post, we are going to work with data that can be found here. Furthermore, we will import the data from a CSV (.csv) file directly from the URL.

Importing Data from CSV

import pandas as pd

data = 'https://gist.githubusercontent.com/baskaufs/1a7a995c1b25d6e88b45/raw/4bb17ccc5c1e62c27627833a4f25380f27d30b35/t-test.csv'
df = pd.read_csv(data)
df.head()

In the code chunk above, we first imported pandas as pd. Second, we created a string with the URL to the .csv file. Then we used Pandas read_csv to load the .csv file into a dataframe. Finally, we used the .head() method to print the first five rows:

Example Dataframe

As can be seen in the image above, we have two columns (grouping and height). Luckily, the column names are easy to work with when we, later, are going to subset the data. If we, on the other hand, had long column names, renaming the columns in the Pandas dataframe would be wise.

Subsetting the Data

Finally, before calculating some descriptive statistics, we will subset the data. In the code below, we use the query method to create two Pandas series objects:

# Subset data
male = df.query('grouping == "men"')['height']
female = df.query('grouping == "women"')['height']

In the code chunk above, we first subset the rows containing men in the column grouping. Subsequently, we do the exact same thing for the rows containing women. Note that we are also selecting only the column named ‘height’ (i.e., the string within the brackets). Using the brackets and the column name as a string is one way to select columns in a Pandas dataframe. Finally, if you don’t know the variable names, see the post How to Get the Column Names from a Pandas Dataframe – Print and List for more information.
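For comparison, the same subsetting can be done with boolean indexing instead of query; a small sketch using made-up data that mirrors the structure of the tutorial’s dataframe:

```python
import pandas as pd

# Hypothetical data with the same structure as the tutorial's dataset
df = pd.DataFrame({
    'grouping': ['men', 'men', 'women', 'women'],
    'height': [180.0, 178.5, 171.0, 169.5],
})

# Boolean-mask equivalent of df.query('grouping == "men"')['height']
male = df.loc[df['grouping'] == 'men', 'height']
female = df.loc[df['grouping'] == 'women', 'height']
```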

Descriptive Statistics

Now, we are going to use the groupby method together with the describe method to calculate summary statistics. Note, that here we use the complete dataframe:

df.groupby('grouping').describe()

As we are interested in the difference between the two groups (‘men’ and ‘women’) in the dataset, we used ‘grouping’ as input to the groupby method. If you are interested in learning more about grouping data and calculating descriptive statistics in Python, see the following two posts:

In the next section, you will finally learn how to carry out a two-samples t-test with Python.

How to Check the Assumptions of the Two-Samples T-test in Python

In this section, we will cover how to check the assumptions of the independent samples t-test. Of course, we are only going to check assumptions 2 and 3. That is, we will start by checking whether the data from the two groups follow a normal distribution (assumption 2). Second, we will check whether the two populations have the same variance.

Checking the Normality of Data

There are several methods to check whether our data is normally distributed. Here, we will use the Shapiro-Wilks test. Here’s how to examine if the data follow the normal distribution in Python:

from scipy import stats

stats.shapiro(male)
# Output: (0.9550848603248596, 0.7756242156028748)

stats.shapiro(female)
# Output: (0.9197608828544617, 0.467536598443985)

In the code chunk above, we performed the Shapiro-Wilk test on both Pandas series (i.e., for each group separately). Consequently, we get a tuple each time we use the shapiro method. This tuple contains the test statistic and the p-value. Here, the null hypothesis is that the data follow a normal distribution; since both p-values are well above .05, we can infer that the data from both groups are normally distributed.

Now, there are of course other tests; see this excellent overview for more information. Finally, it is also worth noting that most statistical tests for normality are sensitive to large samples. Normality can also be explored visually using histograms and Q-Q plots, to name a few. See the post How to Plot a Histogram with Pandas in 3 Simple Steps.
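As a sketch of the visual route, scipy’s probplot computes the quantile pairs behind a Q-Q plot (the sample values below are made up for illustration):

```python
from scipy import stats

sample = [180, 178, 183, 175, 179, 181, 177]

# probplot returns the theoretical and ordered sample quantiles plus a
# least-squares fit; an r close to 1 suggests approximate normality
(osm, osr), (slope, intercept, r) = stats.probplot(sample, dist='norm')
```

Passing plot=plt (a matplotlib axes or module) to probplot would draw the Q-Q plot directly instead of just returning the values.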

Checking the Homogeneity of Variance Assumption in Python

We’ll use Levene’s test to check for homogeneity of variances (equal variances), which can be performed with the levene function as follows:

stats.levene(male, female)
# Output: LeveneResult(statistic=0.026695150465104206, pvalue=0.8729335280501348)

Again, the p-value suggests that the data follow the assumption of equal variances. See this article for more information. There are also alternatives to Levene’s test of homogeneity.

It is worth noting here, that if our data does not fulfill the assumption of equal variance, we can use Welch’s t-test instead of Student’s t-test. See the references at the end of the post.
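A minimal sketch of that decision (with made-up samples): in scipy, Welch’s t-test is simply a matter of flipping the equal_var flag after checking Levene’s test.

```python
from scipy import stats

group_a = [179, 181, 176, 184, 178]
group_b = [170, 172, 168, 175, 169]

# Levene's test for equal variances
lev_stat, lev_p = stats.levene(group_a, group_b)

# If lev_p were small (variances unequal), we would use Welch's t-test
# instead of Student's by setting equal_var=False
t_welch, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)
```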

How to Carry Out a Two-Samples T-test in Python in 3 Ways

In this section, we are going to learn how to perform an independent samples t-test with Python. To be more exact, we will cover three methods: using SciPy, Pingouin, and Statsmodels. First, we will use SciPy:

1 T-test with SciPy 

Here’s how to carry out a two-samples t-test using SciPy:

res = stats.ttest_ind(male, female, equal_var=True)
display(res)

In the code chunk above, we used the ttest_ind method to carry out the independent samples t-test. Here, we used the Pandas series we previously created (subsets) and set the equal_var parameter to True. If, on the other hand, we have data that violate the third assumption (equal variances), we should set the equal_var parameter to False.

2 Two-Samples T-Test with Pingouin

To carry out a t-test using the Python package Pingouin you just do as follows:

import pingouin as pg

res = pg.ttest(male, female, correction=False)
display(res)

In the code chunk above, we started by importing pingouin as pg. Following this, we carried out the statistical analysis (i.e., using the ttest method). Noteworthy, here we set correction to False, as we want to carry out Student’s t-test. If the data were violating the homogeneity assumption, we would set correction to True; this way we would carry out Welch’s t-test instead.

3 T-test with Statsmodels

Finally, if you prefer to use the Statsmodels package here’s how to carry out an independent samples t-test:

from statsmodels.stats.weightstats import ttest_ind

ttest_ind(male, female)

In the code chunk above, we imported the ttest_ind method to carry out our data analysis. All three methods described in this post require that you have already imported Pandas and used it to load your dataset.

How to Interpret the Results from a T-test

In this section, you will briefly learn how to interpret the results from the two-sample t-test carried out with Python. Furthermore, this section will focus on the results from Pingouin and Statsmodels as they give us a richer output (e.g., degrees of freedom, effect size). Finally, following this section you will learn how to report the t-test according to the guidelines of the American Psychological Association.

Interpreting the P-value

Now, the p-value of the test is 0.017106, which is less than the significance level alpha (e.g., 0.05). This means that we can conclude that the men’s average height is statistically different from the women’s average height.

Specifically, a p-value is the probability of obtaining an effect at least as extreme as the one in the data you have obtained (i.e., your sample), assuming that the null hypothesis is true. Moreover, p-values address only one question: how likely is your collected data, assuming a true null hypothesis? Importantly, a p-value cannot be used as support for the alternative hypothesis.

Interpreting the Effect Size (Cohen’s D)

One common way to interpret Cohen’s d obtained in a t-test is in terms of the relative strength of, e.g., the condition. Cohen (1988) suggested that d = 0.2 should be considered a ‘small’ effect size, 0.5 a ‘medium’ effect size, and 0.8 a ‘large’ effect size. This means that if two groups’ means don’t differ by at least 0.2 standard deviations, the difference is trivial, even if it is statistically significant.
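For reference, Cohen’s d for two independent samples can also be computed by hand from the group means and the pooled standard deviation; a self-contained sketch (the helper name is our own, not part of any of the packages above):

```python
import math

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    mean_x = sum(x) / nx
    mean_y = sum(y) / ny
    # Sample variances (ddof=1)
    var_x = sum((v - mean_x) ** 2 for v in x) / (nx - 1)
    var_y = sum((v - mean_y) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * var_x + (ny - 1) * var_y) / (nx + ny - 2))
    return (mean_x - mean_y) / pooled_sd
```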

Interpreting the Bayes Factor from Pingouin

Now, if you used Pingouin to carry out the two-sample t-test you might have noticed that we also get the Bayes Factor.  See this post for more information.

Reporting the Results

In this section, you will learn how to report the results according to the APA guidelines. In our case, we can report the results from the t-test like this:

There was a significant difference in height for men (M = 179.87, SD = 6.21) and women (M = 171.05, SD = 5.69); t(12) = 2.77, p = .017, 95% CI [1.87, 15.76], d = 1.48.
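As a small illustrative helper (our own sketch, not part of any of the packages above), the statistics portion of that APA-style string can be assembled from the values that, e.g., Pingouin returns:

```python
def apa_ttest(t, df, p, d):
    """Format independent t-test results in an APA-like style (illustrative sketch)."""
    # APA style drops the leading zero from p-values (p = .017, not p = 0.017)
    p_str = f"{p:.3f}".lstrip('0')
    return f"t({df}) = {t:.2f}, p = {p_str}, d = {d:.2f}"

# Using the values reported above
print(apa_ttest(2.77, 12, 0.017, 1.48))
```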

In the next section, you will also quickly learn how to visualize the data in two different ways: boxplots and violin plots.

Visualize the Data using Boxplots:

One way to visualize data from two groups is using the box plot:

import seaborn as sns

sns.boxplot(x='grouping', y='height', data=df)

In the code chunk above, we imported seaborn (as sns), and used the boxplot method. First, we put the column that we want to display separate plots for on the x-axis. Here’s the resulting plot:

Visualize the Data using Violin Plots:

Here’s another way to report the results from the t-test: adding a violin plot to the report/manuscript:

import seaborn as sns

sns.violinplot(x='grouping', y='height', data=df)

As when creating the box plot, we import seaborn and add the columns/variables we want on the x- and y-axes. Here’s the resulting plot:

More on data visualization with Python:

All the code examples in this post can be found in this Jupyter Notebook. Now, if you run this, make sure you have all the needed packages installed in your virtual environment.

Other Data Analysis Methods in Python

Finally, there are of course other ways to analyze your data. For instance, you can use Analysis of Variance (ANOVA) if there are more than two groups in the data. See the following posts about how to carry out ANOVA:

Recently, there has been a growing interest in machine learning methods and you can see the following posts for more information:

Summary

In this post, you have learned three methods to perform a two-sample t-test. Specifically, you have learned how to install, and use, three Python packages that can be used for data analysis. Furthermore, you have learned how to interpret and report the results from this statistical test. Below, you will find some useful resources and references if you want to learn more. As far as I am concerned, the Python package Pingouin gives you the most comprehensive output, and that’s the package I’d choose.

Additional Resources and References

Here are some useful peer-reviewed articles, blog posts, and books. Refer to these if you want to learn more about the t-test, p-value, effect size, and Bayes Factors. 

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Independent Samples T-Test (Paywalled)
Interpreting P-values.
It’s the Effect Size, Stupid – What effect size is and why it is important
Using Effect Size—or Why the P Value Is Not Enough.
Beyond Cohen’s d: Alternative Effect Size Measures for Between-Subject Designs (Paywalled).
A tutorial on testing hypotheses using the Bayes factor.


Categories: FLOSS Project Planets

PSF GSoC students blogs: Extending CNNs beyond classification - Weekly Check-in 11

Planet Python - Mon, 2020-08-10 14:05

End of Week 9  - 03/08/2020


What did you do this week?

Iterating on last week's trial-and-error work on making it easier to add and connect neural networks (using the PyTorch library), I made some progress on that part this week! After discussions with the mentors, I devised a way to create and connect neural networks that will make playing with them easy and accessible to everyone! I am currently working on extending the applications of CNNs beyond classification and adding support for various tasks that use CNNs.

What is coming up next?

I will continue working on Computer Vision related Custom Neural Networks and work on making the code flexible so as to not be confined to only classification applications of CNNs which is currently the case in the DFFML PyTorch model plugin.

Did you get stuck anywhere?

Yes, I got stuck at finding a way to make adding the neural networks using JSON and YAML files as flexible as possible so as to make adding neural networks more user friendly. I will discuss my doubts and ideas to tackle this in the next Weekly Sync meeting with the mentors.


Thank you for reading!

Categories: FLOSS Project Planets

PSF GSoC students blogs: GSoC Weekly Check-In #6

Planet Python - Mon, 2020-08-10 13:43

What did I do this week?

I have finally completed all the functionality that I had proposed in my GSoC proposal. My PR was also accepted. I finished my work by adding the merge conflict dialog which pops up when saving or fetching waypoints.

What will I do next week?

This week is mostly testing and bug fixing. I have to complete the tests for the merge window then I also have a list of small bugs which need to be fixed. My mentors are also now testing the whole application on their development server to find any new bugs as they work with mscolab. There is one more feature that I might add this week after discussion with my mentors in our weekly meeting.

Did I get stuck anywhere?

No, this week went by smoothly.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check-in #11

Planet Python - Mon, 2020-08-10 13:07

What did I do this week? 

I continued my work on documentation.

What's next? 

Will try to wrap up the operations documentation by this week. 

Did I get stuck somewhere?

No,  everything is going smoothly.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Sixth Check-In

Planet Python - Mon, 2020-08-10 12:53

Hello there!

What did I do last week?

It has been quite a fun week for me, given the current state of development and the newly discovered bugs thanks to the pip 20.2 release:

  • Initiate discussion with the maintainers of pip on isolating networking code for late download in parallel (GH-8697)
  • Discuss the UI of parallel download (GH-8698)
  • Log debug information relating lazy wheel decision (GH-8710)
  • Disable caching for range requests (GH-8716)
  • Dedent late download logs (GH-8722)
  • Add a hook for batch downloading (third attempt I think) (GH-8737)
  • Test hash checking for fast-deps (GH-8743)

Did I get stuck anywhere?

Not exactly, everything is going smoothly and I'm feeling awesome!

What is coming up next?

I'll try to solve GH-8697 and GH-8698 within the next few days. I am optimistic that the parallel download prototype will be done within this week.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Pagination, Privacy Policy, Bug Fixing and Testing in the User Story system in GSOC’20

Planet Python - Mon, 2020-08-10 12:38

I started this week with a heavy heart. I entered the last coding phase of the best experience of my life. I made up my mind that I will keep contributing to this project and organization and make the most of this awesome learning opportunity. I will not let it end here. I can already imagine a smile on my mentors' faces as they read this. You are the best people I have ever met and worked with. Now let's jump to my magic work. :)

What did I do this week?

“Many solutions will work, but keep solving the problem till you optimize it and reach the best possible solution.” I do not know which genius said this, but my mind got stuck on it, and I decided to spend some time adding pagination for our beloved users. This means users can reach any page with a click, and we do not have to load all stories in one go. Imagine our free Heroku servers burning on hitting the API to load 500+ stories in one go. LOL!

The most important thing before going live was telling users how we are going to use their data. I worked on the privacy policy page and created a new model in the back end so that admins can manage its contents and update the privacy policy when needed.

This also means that we have to notify users of a privacy policy update. I wrote a custom lifecycle hook in Strapi which automatically creates a notification when the old policy is deleted and a new one is created. This special notification is displayed as a popup modal on the home page for users who are already logged in. They have the option to accept the new policy. If they do not accept then they can continue using our system but are logged out. This means they can only read existing stories.

New users will not see the modal as they can access the updated policy when they register into our system.

After pushing these heavy changes to production, I fixed some bugs related to the My Stories page and pagination not working along with product filters.

Finally I studied Cypress docs for some time. I set up Cypress for our client side so that we can write, run and debug tests easily.

What is coming up next?

I have identified some bugs on the client side and I will spend some time to fix them.

I will try to give the remaining time for writing tests as they are really important to future proof our application.

I have a lot of exciting new features in mind like Slack integration, email notifications etc and I will add them only if time permits.

I understand that the best things take time and patience and I will keep adding a thousand more features to user story even after GSOC. :)

Did I get stuck anywhere?

It is natural to get stuck while trying to do the best things.

I got stuck at multiple places while trying to make the pagination feature work along with other filters. Covering all possible cases for the privacy policy model was also a bit tricky. Discovering new bugs might be easy, but they can be difficult to solve. Cypress, and software testing in general, was absolutely new to me.

I do not give up easily and kept trying to overcome all obstacles by the end of the week. All my efforts were really worth it. I learnt a lot.

I am also blessed to have such awesome mentors for support and motivation. They work really hard everyday and I learn a lot just by observing them. :)

Categories: FLOSS Project Planets

KDE neon Rebased on 20.04

Planet KDE - Mon, 2020-08-10 12:20

KDE neon is our installable Linux with continuous integration and deployment. It’s based on Ubuntu who had a new Long Term Support Release recently so we’ve rebased it on Ubuntu 20.04 now.

You should see a popup on your install in the next day or so. It’ll ask you to make sure your system is up to date then it’ll upgrade the base to 20.04 which takes a while to download and then another while to install.

Afterwards it should look just the same because it’s the same wonderful Plasma desktop.

Upgrade Instructions

The installable ISOs are also updated and this time they all use the Calamares installer.

Testing and Unstable editions are built from the soon-to-be-released Git branches and the untested Git branches respectively. Alas, when trying the installable ISOs today we found some bugs in the Git Calamares installer, so they're not published yet, but the upgrader will still pop up on existing installs.

We implemented OEM install mode in Calamares so the other way to get neon is to buy a KDE Slimbook III and it’ll use that.

We also implemented a full disk encryption tickbox in Calamares.

The Docker images are still to be updated, and the Snap packages also need to be moved over.

Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Board Elections, 2020

Planet Drupal - Mon, 2020-08-10 12:00

It is that time of year again where the Drupal Association Board looks to fill the At-Large member seat that becomes available every year.

This year, we send our thanks to Suzanne Dergacheva, who will be stepping down as At-Large board member after serving her two-year term. Last year, we elected Leslie Glynn to the board; she has one more year to serve, and we are sure she will be happy to welcome the next person onto the board!

Important Dates

Nominations open: 10 August 2020

Nominations close: 27 August 2020

"Meet the Candidates" begins: 28 August 2020

"Meet the Candidates" ends: 13 September 2020

Voting opens: 14 September 2020

Voting closes: 30 September 2020

Announcement of winner: 30 October 2020

What does the Drupal Association Board do?

The Board of Directors of the Drupal Association are responsible for financial oversight and setting the strategic direction for serving the Drupal Association’s mission, which we achieve through Drupal.org and DrupalCon. Our mission is: “Drupal powers the best of the Web. The Drupal Association unites a global open source community to build, secure, and promote Drupal.”

Who can run?

There are no restrictions on who can run, other than you must be a member of the Drupal Association.

How do I run?

Candidates are highly encouraged to:

  1. Watch the latest Community Update Video

  2. Read about the board and elections
  3. Read the Board Member Agreement

Then visit the Election 2020: Dates & Candidates page to self-nominate. The first step is to fill in a form nominating yourself. Drupal Association staff will create a candidate page for you and make you the author, so you can continue to add content there during the election and answer any questions posed by the electorate as comments on the page.

Who can vote?

For 2020 and moving forward, all individual members of the Drupal Association may vote in the election.

If you are not currently a member, please ensure you have renewed your membership before voting opens, on 14 September.

How do I vote?

The Drupal Association Board Elections are moving to the free and open source Helios Voting service for 2020 and beyond. All Drupal Association individual members will receive their unique voting links via email, sent to the primary email address in their Drupal.org profile, when voting opens. Follow the instructions in that email to vote.

Elected board member special responsibilities

As detailed in a previous blog post, the elected members of the Drupal Association Board have a further responsibility that makes their understanding of issues related to diversity & inclusion even more important: they provide a review panel for our Community Working Group. This is a hugely important role in our global community.

What should I do now?

Self-nomination is open! Please do read further:

Then consider if the person who should be standing for election is you. (It probably is!)

Categories: FLOSS Project Planets

QML Online - Qt 5.15, Kirigami, Breeze and more!

Planet KDE - Mon, 2020-08-10 11:55

I'm happy to announce that QML Online is now running with the latest version of Qt (5.15) and with an initial Kirigami integration, including Breeze icons!

There are also a couple of quality-of-life updates, like:

  • HTML fixes/corrections
  • Better integration with Firefox
  • New Qt version information label
  • Support for QtQuick.XmlListModel

But sadly, with new features come new bugs! As I said before, the Kirigami integration is an initial version; there are some known bugs with it, like:

  • OverlayDrawer has a transparent background (the issue appears to be common in low performance environments)
  • The Kirigami version is a bit old (v5.70); newer versions need an upcoming Qt feature for QFuture and friends

What is next

I'll be working closely with Kirigami to fix these bugs, and the new feature of multiple instances of QML Online on the same webpage will be on hold for now.

As a reminder, please feel free to send Merge Requests, feature requests, opinions and issues.

Thanks

I would like to thank all users of QML Online, and the people who are sending kind words about how it's improving their workflow and how useful the tool is! That really helps to move the project forward.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check-in #6

Planet Python - Mon, 2020-08-10 11:24

What did I do this week?

So Rose opened up a PR which will allow tern to set environment variables, which really shrinks the size of the shell scripts required to get metadata. I also finalized my PR for metadata collection.

What's next?

Waiting for the pr to be approved by mentors :)

Did I get stuck somewhere?

Still stuck with parsing copyrights to obtain license.

Categories: FLOSS Project Planets

Real Python: Pass by Reference in Python: Background and Best Practices

Planet Python - Mon, 2020-08-10 10:00

After gaining some familiarity with Python, you may notice cases in which your functions don’t modify arguments in place as you might expect, especially if you’re familiar with other programming languages. Some languages handle function arguments as references to existing variables, which is known as pass by reference. Other languages handle them as independent values, an approach known as pass by value.

If you’re an intermediate Python programmer who wishes to understand Python’s peculiar way of handling function arguments, then this tutorial is for you. You’ll implement real use cases of pass-by-reference constructs in Python and learn several best practices to avoid pitfalls with your function arguments.

In this tutorial, you’ll learn:

  • What it means to pass by reference and why you’d want to do so
  • How passing by reference differs from both passing by value and Python’s unique approach
  • How function arguments behave in Python
  • How you can use certain mutable types to pass by reference in Python
  • What the best practices are for replicating pass by reference in Python


Defining Pass by Reference

Before you dive into the technical details of passing by reference, it’s helpful to take a closer look at the term itself by breaking it down into components:

  • Pass means to provide an argument to a function.
  • By reference means that the argument you’re passing to the function is a reference to a variable that already exists in memory rather than an independent copy of that variable.

Since you’re giving the function a reference to an existing variable, all operations performed on this reference will directly affect the variable to which it refers. Let’s look at some examples of how this works in practice.

Below, you’ll see how to pass variables by reference in C#. Note the use of the ref keyword in the highlighted lines:

using System;

// Source:
// https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/passing-parameters

class Program
{
    static void Main(string[] args)
    {
        int arg;

        // Passing by reference.
        // The value of arg in Main is changed.
        arg = 4;
        squareRef(ref arg);
        Console.WriteLine(arg);
        // Output: 16
    }

    static void squareRef(ref int refParameter)
    {
        refParameter *= refParameter;
    }
}

As you can see, the refParameter of squareRef() must be declared with the ref keyword, and you must also use the keyword when calling the function. Then the argument will be passed in by reference and can be modified in place.

Python has no ref keyword or anything equivalent to it. If you attempt to replicate the above example as closely as possible in Python, then you’ll see different results:

>>> def main():
...     arg = 4
...     square(arg)
...     print(arg)
...
>>> def square(n):
...     n *= n
...
>>> main()
4

In this case, the arg variable is not altered in place. It seems that Python treats your supplied argument as a standalone value rather than a reference to an existing variable. Does this mean Python passes arguments by value rather than by reference?

Not quite. Python passes arguments neither by reference nor by value, but by assignment. Below, you’ll quickly explore the details of passing by value and passing by reference before looking more closely at Python’s approach. After that, you’ll walk through some best practices for achieving the equivalent of passing by reference in Python.
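As a quick sketch of what pass-by-assignment means in practice: rebinding a parameter name never affects the caller, but mutating a shared mutable object does. The function names below are illustrative, not from the article:

```python
# Sketch: Python's "pass by assignment" in practice.
def rebind(x):
    x = x * x          # rebinds the local name only; caller unaffected

def mutate(items):
    items.append(99)   # mutates the object both names refer to

n = 4
rebind(n)
print(n)               # → 4, the caller's binding is untouched

nums = [1, 2]
mutate(nums)
print(nums)            # → [1, 2, 99], mutation is visible to the caller
```

This is why lists and dicts can behave "by reference" while rebinding an argument never leaks back to the caller.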

Contrasting Pass by Reference and Pass by Value

When you pass function arguments by reference, those arguments are only references to existing values. In contrast, when you pass arguments by value, those arguments become independent copies of the original values.

Let’s revisit the C# example, this time without using the ref keyword. This will cause the program to use the default behavior of passing by value:

using System;

// Source:
// https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/passing-parameters

class Program
{
    static void Main(string[] args)
    {
        int arg;

        // Passing by value.
        // The value of arg in Main is not changed.
        arg = 4;
        squareVal(arg);
        Console.WriteLine(arg);
        // Output: 4
    }

    static void squareVal(int valParameter)
    {
        valParameter *= valParameter;
    }
}

Here, you can see that squareVal() doesn’t modify the original variable. Rather, valParameter is an independent copy of the original variable arg. While that matches the behavior you would see in Python, remember that Python doesn’t exactly pass by value. Let’s prove it.

Python’s built-in id() returns an integer representing the memory address of the desired object. Using id(), you can verify the following assertions:

  1. Function arguments initially refer to the same address as their original variables.
  2. Reassigning the argument within the function gives it a new address while the original variable remains unmodified.

In the below example, note that the address of x initially matches that of n but changes after reassignment, while the address of n never changes:

>>> def main():
...     n = 9001
...     print(f"Initial address of n: {id(n)}")
...     increment(n)
...     print(f"  Final address of n: {id(n)}")
...
>>> def increment(x):
...     print(f"Initial address of x: {id(x)}")
...     x += 1
...     print(f"  Final address of x: {id(x)}")
...
>>> main()
Initial address of n: 140562586057840
Initial address of x: 140562586057840
  Final address of x: 140562586057968
  Final address of n: 140562586057840

Read the full article at https://realpython.com/python-pass-by-reference/ »


Categories: FLOSS Project Planets

Tag1 Consulting: Offline documents with y-indexeddb and Web Workers (part 3)

Planet Drupal - Mon, 2020-08-10 09:59

Slow or intermittent connections are an all-too-common case that many users face when attempting to work with applications. Offline-enabled applications are a particularly challenging use case because they require synchronization and a local understanding of data.

Read more preston Mon, 08/10/2020 - 06:59
Categories: FLOSS Project Planets

Stack Abuse: Translating Strings in Python with TextBlob

Planet Python - Mon, 2020-08-10 08:30
Introduction

Text translation is a difficult computer problem that gets better and easier to solve every year. Big companies like Google are actively working on improving their text translation services which enables the rest of us to use them freely.

Apart from their great personal use, these services can be used by developers through various APIs. This article is about TextBlob which uses one such API to perform text translation.

What is TextBlob?

TextBlob is a text-processing library written in Python. According to its documentation, it can be used for part-of-speech tagging, parsing, sentiment analysis, spelling correction, translation, and more. In this article, we'll focus on text translation.

Internally, TextBlob relies on Google Translate's API. This means that an active internet connection is required for performing translations.

Installing TextBlob

Let's start off by installing TextBlob using pip, and downloading the corpora of words it needs to function:

$ pip install -U textblob
$ python -m textblob.download_corpora

Using TextBlob

Using TextBlob is straightforward and simple. We just import it, assign a string to the constructor and then translate it via the translate() function:

from textblob import TextBlob

blob = TextBlob("Buongiorno!")
print(blob.translate(to='en'))

The translate() function accepts two arguments - from_lang and to. The from_lang is automatically set depending on the language TextBlob detects.

The above example uses the Italian phrase Buongiorno which translates to Good morning in English.

Sometimes, we might wish to detect a language to decide if the text needs translation at all. To detect the language of some text, TextBlob's detect_language() function is used:

from textblob import TextBlob

blob = TextBlob("Buongiorno!")
if blob.detect_language() != 'en':
    print(blob.translate(to='en'))

Translation Examples and Accuracy

Sentence Translation From English into Hindi

As our first example, we'll see how well English is translated into Hindi:

blob = TextBlob('TextBlob is a great tool for developers')
print(blob.translate(to='hi'))

The result is the following:

डेवलपर्स के लिए एक बढ़िया टूल है

Translating Russian Poetry into Croatian

Let's see how TextBlob manages poetry. The following is the work of a Russian poet Vladimir Mayakovsky, first in Russian and then in English:

Послушайте!
Ведь, если звезды зажигают -
значит - это кому-нибудь нужно?
Значит - кто-то хочет, чтобы они были?
Значит - кто-то называет эти плевочки жемчужиной?

Listen!
See, if stars light up
does it mean that there is someone who needs it?
Does it mean that someone wants them to exist?
It means that someone calls these little spits magnificent.

We'll feed the original poem in Cyrillic to TextBlob and see how well it translates into Croatian. Since both Russian and Croatian and Slavic languages, the translation is expected to be relatively good:

from textblob import TextBlob

poem = 'Послушайте! Ведь, если звезды зажигают - значит - это кому-нибудь нужно? Значит - кто-то хочет, чтобы они были? Значит - кто-то называет эти плевочки жемчужиной?'
blob = TextBlob(poem)
print(blob.translate(to='hr'))

By running the above code, we get the following output (formatted for convenience):

Slušati!
Uostalom, ako su zvijezde upaljene
znači li to nekome treba?
Dakle - netko želi da to budu?
Dakle - netko naziva ove pljuvačke biserom?

Most of the translation is good except for the first word which would sound better if it were Slušajte instead of Slušati, in vocative. Although not perfect, you could understand the translation.

Translating Array of German Words into English

In some cases, we won't have full sentences to translate. We might have a list or an array of words. These are a lot easier to translate, since there's no context to potentially change the translation:

from textblob import TextBlob

worter = ['einer', 'zwei', 'drei', 'vier', 'fünf', 'sechs', 'sieben', 'acht', 'neun', 'zehn']

for w in worter:
    blob = TextBlob(w)
    print(blob.translate(to='en'))

The result is:

one
two
three
four
five
six
seven
eight
nine
ten

Looks good!

Translating Food from English into French

Finally, let's translate an English word into French:

from textblob import TextBlob

blob = TextBlob('An apple')
print(blob.translate(to='fr'))

The French translation is Une pomme. Bon Appétit!

Conclusion

Translation is an interesting but difficult computer problem. Deep learning and other AI methods are becoming increasingly good at understanding language and performing automated language translation. TextBlob is one of the tools available to developers that can be used for performing such automated language translations.

There are many benefits to this kind of approach, however, not all translations are perfect. These techniques are still evolving and if you're in the need of a high-quality translation of great importance, it is always best to consult with a professional translator.

For all other purposes, however, tools such as TextBlob are more than enough to provide convenience for simple translation and satisfy the curiosity of developers using them.

Categories: FLOSS Project Planets

IslandT: Repeat repeat and more repeat with Python

Planet Python - Mon, 2020-08-10 08:21

In this article, we are going to revisit CodeWars and solve a simple problem using Python. The problem is as follows.

Write a Python function that repeats a given string the number of times the string is supposed to be repeated. The solution to this question is as follows.

def repeat_str(repeat, string):
    repeatString = ''
    for i in range(0, repeat):
        repeatString += string
    return repeatString

As you can see from above, the repeated string will be returned to the caller of this function.
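For what it's worth, the same result can be had without an explicit loop, since Python strings support the sequence repetition operator. A minimal alternative sketch, not the author's solution:

```python
def repeat_str(repeat, string):
    # Strings are sequences, so * repeats them directly.
    return string * repeat

print(repeat_str(3, "ab"))  # → ababab
print(repeat_str(0, "ab"))  # repeat of 0 yields an empty string
```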

Categories: FLOSS Project Planets

PSF GSoC students blogs: GSoC Week 11: Report.print()

Planet Python - Mon, 2020-08-10 08:13
What I did this week?

This week I was looking into the best way to provide users with a printable format. I am working on a ReportLab solution, but I also worked on improving and making some changes to the HTML structure.

What is coming up next?

By this week I'll implement a solution for the print problem: either a PDF or an HTML template that is easy to print.

Have I got stuck anywhere?

There are no blocking issues for me at this moment.

Categories: FLOSS Project Planets

OpenSense Labs: Different options of decoupling Drupal

Planet Drupal - Mon, 2020-08-10 08:06
Different options of decoupling Drupal Shalini Rawat Mon, 08/10/2020 - 17:36

In this world of growing interfaces and APIs, content plays a significant role and great user experiences begin with great content. Certainly, wearables, conversational interfaces, IoT, and more have begun to establish the changes in how we experience the internet, making alterations in the digital marketplace. Therefore, to keep up the pace, organizations need to adopt front-end technologies such as AngularJS, React JS, etc. that can deliver the content at a fast speed. Decoupled Drupal (or headless Drupal) is one such solution that has been gaining ground lately and is being considered as a holy-grail that exhibits innovative strength to produce exceptional digital experiences. 


The role of the website is not limited to the creation of the content. In other words, the website is responsible for delivering the content in a user-friendly manner across all devices. Keeping this responsibility in mind, more and more websites are opting for a decoupled approach and preferring a strong content store in Drupal. And probably that is why you are here. However, the most complex question remains - how to decouple Drupal. Therefore, in this post, we will gain more insight into different ways of implementing decoupled Drupal.

Ways to Decouple Drupal

In a traditional web project, Drupal is used to manage the entire content and to present it. Since Drupal is a monolithic system, it maintains entire control over the presentation as well as the data layers. Traditional Drupal has been an excellent choice for editors who wish to take full control over the visual elements on the page. Not to mention, traditional Drupal opens the door to features such as in-place editing and layout management.

There are certain factors that need to be considered by a technical decision maker before implementing the decoupled approach. These factors indicate whether the effort to decouple Drupal is the right fit for your organization. Therefore, let’s take a look at the factors that should be part of the decision-making process.

  • First things first, do you have separate backend and front-end development resources? 
  • Are you building a native app and want to use Drupal to manage your content, data, users, etc.? 
  • Do you envision publishing content or data across multiple products or platforms?
  • Is interactivity itself a primary concern?
  • Does working around Drupal’s rich feature set and interface offer more work in the long run?
  • Do you want the hottest technology?
If your organization answers yes to the above questions, it is ready for the decoupling process.

There are two approaches to decoupling Drupal, depending on your preferences and requirements: progressively decoupled and fully decoupled.

Source: Drupal.org

Progressively Decoupled Drupal

Progressively decoupled Drupal is required when an additional layer called JavaScript is used to deliver a highly interactive end-user experience. In progressively decoupled Drupal, a JavaScript framework is used as a layer on top of the existing Drupal front end. The former is responsible for nothing more than rendering a single block or component on a page or it may render everything within the page body. The progressive decoupling model lies on a spectrum; the less of the page committed to JavaScript, the more editors can take control over the page through Drupal's administrative capabilities.

Fully Decoupled Drupal

Fully decoupled Drupal can be implemented in two ways, namely Fully decoupled app and Fully decoupled static site. 
Fully decoupled Drupal web app involves a complete separation of concerns between the presentation layer and all other aspects of the CMS. In this approach, the CMS becomes a data provider, and a JavaScript application along with server-side rendering is held responsible to render and markup, communicate with Drupal via web service APIs. Despite the unavailability of key functionality like in-place editing and layout management, fully decoupled Drupal app captivates the attention of developers who want greater control over the front end and who are already experienced with building applications in frameworks such as Angular, React, Vue.js, etc. 

On the contrary, JAMstack (JavaScript, APIs, Markup) offers an alternative to the complexities of JavaScript development. It helps in building fully decoupled Drupal static sites. The prime and obvious reason behind the idea is improved performance, security, and reduced complexity for developers.

Front end technologies

In the world of software development, whatever is built around falls into two categories: front-end technology and back-end technology. Front-end technology is everything that is seen by the user and the process that is happening in the background. On the contrary, all the behind-the-scenes activity that is responsible to deliver data and speed to run a screen is considered back-end technology. 

The development of front-end technology is crucial for any business application that wants to succeed and sustain itself in the digital marketplace. It is possible to have the most structured back-end programming to strengthen your application, however, the front-end is what people see and mostly care about.

Therefore, for your consideration, we have rounded up the best front-end technologies that a front end developer can use with Drupal to derive the best outcomes. 


React

React is a JavaScript library that is used to create interactive user interfaces (UIs). It is one of the most powerful and widely used front-end technologies, supported and maintained by the tech giant Facebook. React can split code into components, which enables developers to reuse code and debug quickly. The applications produced are SEO-friendly and highly responsive.
Some of the prominent websites and web applications that use React as their front-end technology include Airbnb, Reddit, Facebook, the New York Times, BBC, etc.

Why connect with Drupal?

  • The combination of React and Drupal can be used to create amazing digital experiences. However, it is quite challenging to know how to leverage the strengths of both React and Drupal.
  • The one-way data flow of React helps in shaping the web page in accordance with the data that is sent from Drupal's RESTful API.

Gatsby

Gatsby is basically an open-source, modern website framework that builds performance into every site by leveraging the latest web technologies such as React and GraphQL. Moreover, Gatsby is used to create blazing-fast apps and websites without needing to become a performance expert. It uses powerful pre-configuration to build a website that uses only static files for incredibly fast page loads, service workers, code splitting, etc.
 
Why connect with Drupal?

  • There is no better option than Gatsby to create an enterprise-quality CMS for free, paired with great modern development experience. Not to mention, it offers all the benefits of the so-called JAMstack, like performance, scalability, and security.
  • Static site generators like Gatsby pre-generate all the pages of the website, unlike dynamic sites that render pages on demand, thereby reducing the need for live database querying. As a result, performance is enhanced and maintenance overheads are reduced.

Angular

Angular is an open-source JavaScript front-end technology. Ever since its advent in 2009, it has been continuously gaining immense popularity for the advantages it delivers to businesses. Angular is supported by a large community and maintained by the tech giant Google. This open-source framework is readable and consistent and enables businesses to build high-performing apps.
Paypal, Gmail, and The Guardian are some of the examples that use Angular as the front end technology.

Why connect with Drupal?

  • The powerful combination of Drupal with Angular will allow you to move display logic to the client-side and streamline your backend, thus resulting in a super speedy site.
  • HTML never goes out of date and is always demanded by web developers and designers because of its simplicity, clarity, and intuitivism in its code structure. Angular makes use of HTML to define user interfaces, hence letting the organizations build interactive web applications that are highly functional and hard to break.

Vue

Vue is a JavaScript library that is used for developing distinct web interfaces and creating single-page applications. The core library of Vue focuses solely on the view layer, providing convenient integration with other libraries and tools to achieve the desired outputs. Vue is not just a technology; it is a whole ecosystem that is easily adaptable because it is lightweight.

Why connect with Drupal?

  • With Drupal and Vue combination, developers have an upper hand to request and store Drupal content as data objects with the help of the official Vue-Resource plugin.
  • When combined with Vue, Drupal becomes competent to exhibit its magic at the back-end while the compelling features of the Vue handle the client-side. Vue’s component system is one of the powerful features that allow large-scale application building, comprising small and self-contained reusable components.
Decoupled Drupal ecosystem

The world of decoupled Drupal is a compendium of a myriad of unique modules and features that can help transform the way you retrieve and manipulate information. Among these, REST, JSON:API, and GraphQL are the most important as well as the most common modules when it comes to a decoupled Drupal implementation. So, let’s take a bird's-eye view of each of them:

RESTful Web Services

Web Services holds the responsibility to allow other applications to read and update information on your site via the Web. REST is one of the ways of making Web Services available on your site. Unlike other techniques, it encourages developers to rely on HTTP methods (such as GET and POST) to operate on resources (data managed by Drupal). RESTful web services is a module that provides a customizable, extensible RESTful API of data managed by Drupal. The module enables you to create an interaction with any content entity (nodes, users, comments) as well as watchdog database log entries. 

JSON: API

JSON: API is designed with an intent to minimize the number of requests as well as the amount of data that is transmitted between clients and servers. And guess what, this efficiency comes without any compromise relating to readability, flexibility, or discoverability. The moment you enable the JSON: API module is the moment you immediately gain a full REST API for every type in your Drupal application. JSON: API works on the principle that the module should be production-ready ‘’out of the box". To clarify, the module is highly inflexible in nature, wherein everything is pre-fixed. Be it about the location where the resources will reside, or what methods are immediately available on them. JSON: API leaves access control to Drupal Core's permissions system. 
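To make the shape of that API concrete, here is a sketch of how a decoupled client might parse a JSON:API response from Drupal. The payload follows the JSON:API spec's data/attributes structure, but the field names and sample content below are illustrative, not taken from a real site:

```python
import json

# A trimmed, hypothetical JSON:API response for article nodes.
sample_response = """
{
  "data": [
    {
      "type": "node--article",
      "id": "8f1b-example-uuid",
      "attributes": {"title": "Decoupling Drupal", "status": true}
    }
  ]
}
"""

payload = json.loads(sample_response)

# Each resource object carries its fields under "attributes".
titles = [item["attributes"]["title"] for item in payload["data"]]
print(titles)  # → ['Decoupling Drupal']
```

Because every resource follows this same envelope, a front end can consume any entity type with one generic parsing routine.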

GraphQL

The GraphQL Drupal module is used to query or mutate (update/delete) any content or configuration using the official GraphQL query language. This particular module is considered an extremely powerful tool which opens the door for Drupal to be used in a multitude of applications. It also plays a crucial role in tools around GraphQL that implement auto-completion. This module can be used as a foundation for building your own schema through custom code, or you can use and extend the generated schema using the plugin architecture and the provided plugin implementations from the sub-module.

In addition to these, the shift of responsibility has given rise to the development of some other modules to better serve the content and data. Let’s take a glance at them.

  • You can use Webform REST module to retrieve and submit web forms via REST.
  • If you wish to extend core's REST Export views display to automatically convert any JSON string field to JSON in the output, REST Export Nested module is the best option. 
  • Overriding the defaults that are preconfigured upon the installation of JSON: API module sounds easy with the JSON: API Extras.
  • For easy ingestion of content by other applications, Lightning API provides a standard API that primarily makes use of the json:api and OAuth2 standards via the JSON: API and Simple Oauth modules.
  • To build an integration between GraphQL and Search API modules, GraphQL Search API module can be useful.
  • Subrequest module tells the system to execute several requests in a single bootstrap and then return all the things.
  • Contenta JS module is necessary for Contenta.js to function properly.
  • There is an OpenAPI module in Drupal that can integrate well with both core REST and JSON: API for documentation of available entity routes in those services. 
  • Schemata module that provides schemas can be used for facilitating generated documentation and generated code.

Check out more such modules in the decoupled Drupal ecosystem here

Conclusion 

To conclude, decoupled Drupal is an interesting approach that can help you build feature-rich interactive websites or content hubs. There is a lot of talk about headless Drupal in the market, and there is no doubt why companies are going gaga over it. In today’s world, end-users look forward to highly interactive websites that can pop out results in a jiffy. Moreover, content needs to be made available at all touch-points in harmony. Decoupled Drupal solves these problems by creating different layers for presentation and data.

However, it is equally important to dig into certain minute downsides as well and carefully consider them before taking the decoupled path.

Got a question? Feel free to ping us at hello@opensenselabs.com and our experts will help you embark on your Drupal project.

Categories: FLOSS Project Planets

PSF GSoC students blogs: Weekly Check In #6

Planet Python - Mon, 2020-08-10 06:15

Hello all!

Finally the last stage of GSoC is here. I think a most viable product is ready leaving out a few bugs which are being worked on right now.

The last week saw a lot of work, and several PRs got merged. The major highlight was a feature modification on the New Story page. I had created a search dropdown on this page. What it basically did was, when a user tried to add a title for their new story, the existing stories were searched using this text as keywords and the relevant results were shown. Now, I got a new design for the way these results were supposed to be shown, whose implementation was a little tricky because, to achieve the desired result, I had to make changes to elements in the DOM based on changes in screen size. It took some time but I was able to achieve the required functionality. I employed the useLayoutEffect React hook to add the relevant event listener. Apart from this, I worked on resolving several bugs and added minor components to existing UIs. One such fix was on the My Stories and User Profile pages, where the stories were not getting displayed because the StoriesList component (that is used for displaying the stories) was updated to require more input parameters.

I did not face any issues this week and as the deadline approaches I'll be working on incorporating the final UI that has been given to me.

Categories: FLOSS Project Planets

Talk Python to Me: #277 10 tips every Django developer should know

Planet Python - Mon, 2020-08-10 04:00
We recently covered 10 tips that every Flask developer should know. But we left out a pretty big group in the Python web space: Django developers! And this one is for you. I invited Bob Belderbos, who's been running his SaaS business on Python and Django for several years now, to share his tips and tricks.

The 10 tips

  1. Django Admin
  2. ORM magic
  3. Models
  4. Debugging/Performance Toolbar
  5. Extending the User model
  6. Class based views (CBVs)
  7. manage.py
  8. Write your own middleware
  9. Config variable management with python-decouple and dj-database-url
  10. Built-in template tags and filters

Links from the show

Bob on Twitter: https://twitter.com/bbelderbos
Code Challenges Platform: https://codechalleng.es/
PyBites: https://pybit.es/

Django admin: https://docs.djangoproject.com/en/3.0/ref/contrib/admin/
Django admin cookbook: https://books.agiliq.com/projects/django-admin-cookbook/en/latest/
Use some Django ORM magic to get the most common first names: https://twitter.com/pybites/status/1181912492701822976
Django custom manager: https://riptutorial.com/django/example/22038/define-custom-managers
Debug toolbar: https://django-debug-toolbar.readthedocs.io/en/latest/index.html
select_related: https://docs.djangoproject.com/en/3.0/ref/models/querysets/#django.db.models.query.QuerySet.select_related
Extending the user model / working with signals / @receiver: https://simpleisbetterthancomplex.com/tutorial/2016/07/22/how-to-extend-django-user-model.html
Class-based views: https://docs.djangoproject.com/en/3.0/topics/class-based-views/
Comparing class and function-based views: https://github.com/talkpython/100daysofweb-with-python-course/blob/master/days/045-048-django-intro/demo/quotes/views.py
Example of class-based views: https://github.com/talkpython/100daysofweb-with-python-course/blob/master/days/045-048-django-intro/demo/quotes/views-cb.py
Django command template: https://gist.github.com/bbelderbos/c69c057aab07440c9a485e3e9c9ad248
Django middleware example: https://gist.github.com/bbelderbos/0bcb04e0b7a89a0cb108d331ea75f8e1

Config settings management:
python-decouple: https://pypi.org/project/python-decouple/
dj-database-url: https://pypi.org/project/dj-database-url/

Useful template tags and filters: https://docs.djangoproject.com/en/3.0/ref/templates/builtins/

for-empty: https://gist.github.com/bbelderbos/b54040cc288843cf94bd4a90f50e967f
is_new filter example: https://gist.github.com/bbelderbos/41aa5690bd76510a439e0a7a5fee7fcd
Asynchronous Tasks with Django and Celery: https://testdriven.io/blog/django-and-celery/
Celery debugging - CELERY_ALWAYS_EAGER: https://twitter.com/pybites/status/1279432833518444544
secure.py: https://github.com/TypeError/secure.py
django-tinymce: https://github.com/aljosa/django-tinymce

Extra tools Michael mentioned
BeeKeeper Studio: https://www.beekeeperstudio.io
SimpleMDE: https://simplemde.com/
Human time to Python parse string site (the one I forgot): https://pystrftime.com

Sponsors

Linode: https://talkpython.fm/linode
Talk Python Training: https://talkpython.fm/training

Mike Driscoll: PyDev of the Week: Julia Signell

Planet Python - Mon, 2020-08-10 01:05

This week we welcome Julia Signell (@JSignell) as our PyDev of the Week! She helps develop Holoviz, an open source, browser-based data visualization package for Python and Conda. Julia is also a co-organizer of PyDataPHL.

Let’s spend some time getting to know Julia better!

Can you tell us a little about yourself (hobbies, education, etc):

I am a software developer at Saturn Cloud and live in West Philadelphia. Growing up I always liked languages, logic, magic, and maps. I studied environmental engineering at Smith College, focusing on hydrology, and after that spent a while in some different hydrology labs before fully migrating to software development. Once I made the transition, I did a few different jobs at Anaconda, including a stint on the Holoviz team (which was great!), and then started at Saturn Cloud last fall. In terms of hobbies, I try to be outside as much as possible, so I like tennis, skiing, basketball, hiking, kayaking, camping, fishing, and gardening. Other than that I tend to go for homesteader activities like knitting, quilting, and baking. My dream is to own a sheep or two and spin my own wool.

Why did you start using Python?

I started using Python a few different times before it really stuck. One time I was making graphs of gas emissions over time in salt marshes. Another time I was trying to manage incoming data from an ecohydrology field station in rural Kenya and automate storage and access to that data. I started getting serious about data visualization when I was trying to analyze lightning data and make sense of the locations of lightning strikes relative to rainstorms.

What other programming languages do you know and which is your favorite?

I only really work in Python and JavaScript (well, TypeScript). So Python is my favorite. But I do really like JavaScript's ternary operator.
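For anyone unfamiliar with the comparison: JavaScript's ternary operator (`cond ? a : b`) packs an if/else into a single expression, and Python's closest analogue is the conditional expression. A quick illustration (the `parity` function is just an example, not anything from the interview):

```python
# JavaScript: const label = n % 2 === 0 ? "even" : "odd";
# Python's conditional expression puts the "true" value first:
def parity(n: int) -> str:
    return "even" if n % 2 == 0 else "odd"

print(parity(4))  # even
print(parity(7))  # odd
```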

What projects are you working on now?

In my current role at Saturn Cloud, I'm mostly working in the Dask ecosystem, and in particular, I am starting work on a new version of dask-geopandas. Eventually, I'd like to get back to this idea that started kicking around a year or so ago about a specification for Python visualization libraries. The idea is that if libraries could comply with a certain spec, it would make it easier for users to switch back and forth between different libraries, and it'd make it easier for new tools to build off the generic API. Kind of like the NumPy array protocol.
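The NumPy array protocol mentioned here is a concrete example of that kind of spec: any object that implements the `__array__` hook can be consumed by `np.asarray` (and so by most NumPy-based code) without NumPy knowing anything about the class. A minimal sketch, where the `Track` class is hypothetical and purely illustrative:

```python
import numpy as np

class Track:
    """Toy container that opts in to the NumPy array protocol."""
    def __init__(self, samples):
        self._samples = list(samples)

    def __array__(self, dtype=None, copy=None):
        # np.asarray(track) calls this hook, so any NumPy-consuming
        # code works on Track without importing or knowing about it.
        return np.array(self._samples, dtype=dtype)

track = Track([1.0, 2.0, 3.0])
print(np.asarray(track).mean())  # 2.0
```

This is the appeal of a shared protocol: downstream tools code against the spec, not against each concrete library.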

Which Python libraries are your favorite (core or 3rd party)?

Oh. I have to say xarray. Labels on multidimensional arrays? It’s just such a good idea!
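For readers who haven't used xarray: the appeal of labeled dimensions is selecting by name rather than remembering axis positions. The toy class below only sketches that core idea; it is not xarray's actual API (a real xarray `DataArray` adds coordinate labels, alignment, and much more):

```python
import numpy as np

class Labeled:
    """Toy labeled array: select by dimension name, not axis number."""
    def __init__(self, data, dims):
        self.data = np.asarray(data)
        self.dims = list(dims)  # e.g. ["time", "station"]

    def sel(self, **indexers):
        out = self.data
        dims = list(self.dims)
        for dim, i in indexers.items():
            ax = dims.index(dim)
            out = np.take(out, i, axis=ax)
            dims.pop(ax)  # that axis is gone after scalar selection
        return out

temps = Labeled([[10, 12], [11, 13], [9, 14]], dims=["time", "station"])
print(temps.sel(station=1))          # [12 13 14]
print(temps.sel(time=0, station=1))  # 12
```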

What do you like most about Holoviz?

Holoviz has a lot of ideas about APIs for creating visualizations, but doesn't take responsibility for the rendering. I like that idea: building on the libraries that already exist while imagining different ways of interacting with them.

How did you get into open source and what do you like about it?
I first contributed to open source at the Bokeh sprint at the SciPy conference. The atmosphere was really collaborative and fun. This is probably what everybody says, but when I think of what I like most about open source it has to be the community. By working on open source, I get to work with people from all over the place and keep working with them even as their jobs change. It’s a great place to learn and make friends.

Thanks for doing the interview, Julia!

The post PyDev of the Week: Julia Signell appeared first on The Mouse Vs. The Python.

