Feeds

Evolving Web: Where’s Your Head? The Case For (and Against) Headless CMS

Planet Drupal - Thu, 2022-08-25 12:24

Headless content management systems – they’re all the rage. With a market growing at a rate of over 22 percent per year, according to ReportLinker, headless CMS is increasingly the format of choice for e-commerce companies, news organizations and others who deal with a revolving door of content delivered across multiple platforms.

So what, then, is a headless CMS? 

In short, it’s a back-end-only content management system that is not coupled with any front-end presentation layer or “head”. Also referred to as a decoupled CMS, a headless CMS exists primarily as a content repository, transmitting content via an application programming interface (API) to whatever channels the content is aimed at.

Since the birth of the Internet, the vast majority of CMSs have been of the “coupled” variety (the kind most readers will be familiar with), wherein content is uploaded to a back-end and is then automatically transmitted to a pre-built front-end delivery layer. This format has obvious advantages, namely its simplicity – anyone with basic CMS training is able to create and update a website’s content and the front-end is standardized, making it easier to build your website quickly.

However, with the advent of mobile apps, chatbots, the Internet of Things (IoT) and other innovations, many companies and organizations now expect the information on their CMS to be able to do a lot more than just exist on a website. For many, the page layout systems offered by traditional CMSs are a constraint to developers and marketers and can impede the adoption of new third-party technologies.

The case for decoupled architecture

In a headless CMS, front-end developers have free rein to build as many “heads” as they want for as many channels as they want to deliver content to. While this may require more IT expertise on the client end and makes content harder to preview, there are numerous advantages to this structure.

From a scalability standpoint, the advantages of a headless CMS are self-evident. In a traditional coupled system, CMS code is tightly connected to a particular set of templates, which means that expanding to new platforms such as native apps or digital signage involves a great deal of customization and installations on the part of developers. By contrast, with a headless CMS, there are no limits to the platforms that can be added.

Speed and Scalability

In the case of a rapidly expanding business, a traditional CMS places constraints on a website’s capacity to expand. Adding discussion forums, AI chatbots, and other add-ons all mean additional server requests, which in a traditional CMS will slow down the performance of the website while creating a messy patchwork of plugins. A headless CMS, by contrast, means you can offer increasingly diverse services without sacrificing speed and performance.

For companies and organizations that maintain different versions of a website for various languages, locations or franchises, a headless CMS can mean less stress over version control. Because all the content is stored in a single location, the headless format makes it much easier to keep track of what changes have been made where and when and allows you to push revisions out to multiple sites simultaneously.

If you’re a prominent news organization and you’re likely to have articles and videos going viral, you definitely want a decoupled CMS. With a traditional CMS, any sudden burst of traffic is liable to overwhelm the server and paralyze your site – unless you pay for hosting that will accommodate enormous spikes in bandwidth use. This is a non-issue with a headless CMS, where heavy traffic on the front end doesn’t impact the back end at all.

For retail websites, speed truly matters – sites that load in one second have an e-commerce conversion rate 2.5x higher than sites that take five seconds to load, according to Portent – and sites underpinned by headless CMSs are faster across the board. With coupled CMSs, loading a page requires the server to make dozens of automated requests, although advanced caching can solve many of these challenges in the hands of an experienced team. A hand-coded HTML website linked to a headless CMS, by contrast, serves a page with a single request and will typically load in less than a second.

Read more - Building Decoupled Drupal - Part 1: Choosing Your Application

Enhanced security 

Headless CMSs also offer greater security. Sites with traditional CMSs are much more vulnerable to distributed denial-of-service (DDoS) attacks, in which hackers attempt to overwhelm a site by flooding it with superfluous requests – an attack surface that shrinks dramatically when a page is served via a single request, as with a headless CMS. Also, because the API publishes headless content as read-only, it can be protected by multiple layers of code, lessening the risk of attacks.

The case against

While a headless CMS offers great advantages in terms of scalability, stability and security, it also presents challenges and drawbacks and is certainly not suited to all organizations. A traditional CMS with its out-of-the-box front-end templates is advantageous for organizations that lack the internal IT capacity to develop front-end applications or the budget to hire outside developers to develop and maintain a custom solution.

A headless CMS is, by its nature, more complex and more expensive than a traditional CMS, requiring an upfront cost for the CMS, the development team, and the necessary infrastructure to run your website, app and whatever other tools are using the CMS. Headless CMSs also come with formatting challenges, as they do not necessarily allow you to preview what content will look like on the page, requiring extra steps.

For organizations whose websites are largely informational and static, and are unlikely to be suddenly overwhelmed with high demand, a headless CMS is not optimal. With a traditional CMS, the front-end is standardized, meaning that you benefit from existing templates for standard features like the user login page, search results page, etc. You also benefit from the accessibility and SEO work that has been done to optimize the standard templates.

When you build a custom front-end, on the other hand, you take full responsibility for all of these aspects of your website.

The best of both worlds?

For those looking for the scalability and stability of a headless CMS while still benefiting from the out-of-the-box front-end templates offered by conventional systems, a middle ground exists in the form of a hybrid or “progressive decoupled” CMS. As a decoupled system with a content-as-a-service (CaaS) API, a hybrid CMS enables content delivery across multiple channels while also offering marketers the types of interfaces they’re used to from conventional CMSs.

There are some disadvantages here compared to a fully headless CMS. It may be more difficult to take advantage of microservices and omnichannel delivery with a hybrid system versus a fully headless one, and it does not offer the same flexibility when it comes to publishing dynamic content on a multitude of platforms. It can also be more complex to maintain than a fully headless CMS due to the presence of both front-end and back-end code.

Nevertheless, for many organizations looking for high-performance sites with maximum speed and security while avoiding the hassle and cost of from-scratch front-end design, a hybrid CMS might be the perfect solution. You can see an example of this in action with the work we did for Princeton International, whose curriculum planning tools were built with decoupled components.

Train your team - Decoupled Drupal with Gatsby

Headless Drupal

Drupal has long been associated with its flexible front-end theming system and its content publishing features. This might lead one to think that Drupal isn’t the best choice for a headless CMS. However, since Drupal 7, it has been an API-first CMS that provides multiple ways for external systems to integrate with it and is in fact very well suited to a decoupled structure.

When Drupal 8 was released, the ability to function as a decoupled CMS was built into the core with the introduction of a RESTful web services API. This type of API, based on the representational state transfer (REST) architectural style, is ideally suited to building a decoupled site or integrating with an iOS or Android app or other web services.
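
As a rough sketch of what this looks like from a consuming front end, the snippet below fetches article nodes as JSON. The site URL is hypothetical, and it assumes the JSON:API module (included in Drupal core since 8.7) is enabled:

import requests

# Fetch article nodes as JSON from a hypothetical decoupled Drupal back end
response = requests.get("https://example.com/jsonapi/node/article")
response.raise_for_status()

# Per the JSON:API specification, each resource keeps its fields under "attributes"
for article in response.json()["data"]:
    print(article["attributes"]["title"])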

Of course, some of Drupal’s much-vaunted front-end functionality (i.e. templates) is lost or inhibited in a headless setup, which will generally make it much harder to “preview” web content prior to publishing (previewing would require a custom preview workflow, like that provided by Gatsby). However, a hybrid setup will enable you to fully maximize the benefits of using Drupal while still benefiting from all of the advantages of a decoupled CMS.

Is a headless approach right for you?

When considering opting for a headless CMS, it’s important to consider how much time and energy (and budget) you have to commit to your web services. If you have the internal capacity for dedicated front-end development or the budget for ongoing external support in this area, a fully headless approach might well be right for you, especially if you’re dealing with dynamic content across multiple channels.

Conversely, if you’re not a developer and don’t have one on your team, and it’s important for you to be able to preview content prior to publishing, a headless CMS is probably not a good option. With that said, if you deal with potentially viral content and are looking to expand across multiple platforms, but still want to be able to preview content and take advantage of Drupal’s many out-of-the-box features, a hybrid solution might be a good option for your organization. 


+ more awesome articles by Evolving Web
Categories: FLOSS Project Planets

Do you use Kdenlive? Share your project and behind the scenes footage and stills!

Planet KDE - Thu, 2022-08-25 11:35

Kdenlive will be carrying out a fundraiser soon and we would like to explain its place in the moviemaking ecosystem through the experience of the filmmakers that use it. That is you.

We would like to hear about your projects. If you have parts of your work you can share with us, behind the scenes stills or footage on set, shots of your team working on post-production using Kdenlive, and so on, we would love to see them.

If you would like to contribute material we can use in our campaign, we have enabled an upload folder on KDE’s servers. Upload your images and clips there and include a text file along with the media containing attributions so we can credit you. Include links to the finished project so we can give your work some exposure, and, VERY IMPORTANT, a copyleft (CC By or CC By-SA) or Public Domain-like license so we can legally share the material.

And please remember: you can only legally share material you have full rights to!

Thank you!

Upload files

The post Do you use Kdenlive? Share your project and behind the scenes footage and stills! appeared first on Kdenlive.

Categories: FLOSS Project Planets

Stack Abuse: Object Detection Inference in Python with YOLOv5 and PyTorch

Planet Python - Thu, 2022-08-25 06:30
Introduction

Object detection is a large field in computer vision, and one of the more important applications of computer vision "in the wild". On one end, it can be used to build autonomous systems that navigate agents through environments - be it robots performing tasks or self-driving cars - but this requires intersection with other fields. However, anomaly detection (such as defective products on a line), locating objects within images, facial detection and various other applications of object detection can be done without intersecting other fields.

Advice: This short guide is based on a small part of a much larger lesson on object detection belonging to our "Practical Deep Learning for Computer Vision with Python" course.

Object detection isn't as standardized as image classification, mainly because most of the new developments are typically done by individual researchers, maintainers and developers, rather than large libraries and frameworks. It's difficult to package the necessary utility scripts in a framework like TensorFlow or PyTorch and maintain the API guidelines that guided the development so far.

This makes object detection somewhat more complex, typically more verbose (but not always), and less approachable than image classification. One of the major benefits of being in an ecosystem is that it spares you from having to search for useful information on good practices, tools and approaches to use. With object detection - most people have to do way more research on the landscape of the field to get a good grip.

Fortunately for the masses - Ultralytics has developed a simple, very powerful and beautiful object detection API around their YOLOv5 implementation.

In this short guide, we'll be performing Object Detection in Python, with YOLOv5 built by Ultralytics in PyTorch, using a set of pre-trained weights trained on MS COCO.

YOLOv5

YOLO (You Only Look Once) is a methodology, as well as a family of models built for object detection. Since its inception in 2015, YOLOv1, YOLOv2 (YOLO9000) and YOLOv3 have been proposed by the same author(s) - and the deep learning community continued with open-sourced advancements in the following years.

Ultralytics' YOLOv5 is the first large-scale implementation of YOLO in PyTorch, which made it more accessible than ever before, but the main reason YOLOv5 has gained such a foothold is also the beautifully simple and powerful API built around it. The project abstracts away the unnecessary details, while allowing customizability, practically all usable export formats, and employs amazing practices that make the entire project both efficient and as optimal as it can be. Truly, it's an example of the beauty of open source software implementation, and how it powers the world we live in.

The project provides pre-trained weights on MS COCO, a staple dataset on objects in context, which can be used to both benchmark and build general object detection systems - but most importantly, can be used to transfer general knowledge of objects in context to custom datasets.

Advice: If you'd like to learn more about the YOLO method, as well as competitive methods such as SSDs (Single-Shot Detectors) and the two-stage detector camp, including Faster R-CNN and RetinaNet - check out our course lesson on "Object Detection and Segmentation - R-CNNs, RetinaNet, SSD, YOLO"!

Object Detection with YOLOv5

Before moving forward, make sure you have torch and torchvision installed:

! python -m pip install torch torchvision

YOLOv5's got detailed, no-nonsense documentation and a beautifully simple API, as shown on the repo itself, and in the following example:

import torch
import matplotlib.pyplot as plt

# Loading in yolov5s - you can switch to larger models such as yolov5m or yolov5l, or smaller such as yolov5n
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

img = 'https://i.ytimg.com/vi/q71MCWAEfL8/maxresdefault.jpg'  # or file, Path, PIL, OpenCV, numpy, list
results = model(img)

fig, ax = plt.subplots(figsize=(16, 12))
ax.imshow(results.render()[0])
plt.show()

The second argument of the hub.load() method specifies the weights we'd like to use. By choosing anywhere from yolov5n to yolov5l6 - we're loading in the MS COCO pre-trained weights. For custom models:

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path_to_weights.pt')

In any case - once you pass the input through the model, the returned object includes helpful methods to interpret the results, and we've chosen to render() them, which returns a NumPy array that we can chuck into an imshow() call. This results in a nicely formatted image with bounding boxes and labels drawn over the detected objects.

Saving Results as Files

You can save the results of the inference as a file, using the results.save() method:

results.save(save_dir='results')

This will create a new directory if it isn't already present, and save the same image we've just plotted as a file.

Cropping Out Objects

You can also decide to crop out the detected objects as individual files. In our case, for every label detected, a number of images can be extracted. This is easily achieved via the results.crop() method, which creates a runs/detect/ directory, with expN/crops (where N increases for each run), in which a directory with cropped images is made for each label:

results.crop()

Saved 1 image to runs/detect/exp2
Saved results to runs/detect/exp2
[{'box': [tensor(295.09409), tensor(277.03699), tensor(514.16113), tensor(494.83691)],
  'conf': tensor(0.25112),
  'cls': tensor(0.),
  'label': 'person 0.25',
  'im': array([[[167, 186, 165],
                [174, 184, 167],
                [173, 184, 164],

You can also verify the output file structure with:

! ls runs/detect/exp2
# crops  maxresdefault.jpg

! ls runs/detect/exp2/crops
# backpack  bus  car  handbag  person  'traffic light'  umbrella

Object Counting

By default, when you perform detection or print the results object - you'll get the number of images that inference was performed on for that results object (YOLOv5 works with batches of images as well), its resolution and the count of each label detected:

print(results)

This results in:

image 1/1: 720x1280 14 persons, 1 car, 3 buss, 6 traffic lights, 1 backpack, 1 umbrella, 1 handbag
Speed: 35.0ms pre-process, 256.2ms inference, 0.7ms NMS per image at shape (1, 3, 384, 640)
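
If you need those counts programmatically rather than as a printed summary - the results object also offers a pandas() helper that exposes the detections as a DataFrame. A short sketch (the name column holds the detected label):

# Detections for the first image as a Pandas DataFrame
df = results.pandas().xyxy[0]

# Count the detections per label
print(df["name"].value_counts())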

Inference with Scripts

Alternatively, you can run the detection script, detect.py, by cloning the YOLOv5 repository:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

And then running:

$ python detect.py --source img.jpg

Alternatively, you can provide a URL, video file, path to a directory with multiple files, a glob in a path to only match for certain files, a YouTube link or any other HTTP stream. The results are saved into the runs/detect directory.

Going Further - Practical Deep Learning for Computer Vision

Your inquisitive nature makes you want to go further? We recommend checking out our Course: "Practical Deep Learning for Computer Vision with Python".

Another Computer Vision Course?

We won't be doing classification of MNIST digits or MNIST fashion. They served their part a long time ago. Too many learning resources are focusing on basic datasets and basic architectures before letting advanced black-box architectures shoulder the burden of performance.

We want to focus on demystification, practicality, understanding, intuition and real projects. Want to learn how you can make a difference? We'll take you on a ride from the way our brains process images to writing a research-grade deep learning classifier for breast cancer to deep learning networks that "hallucinate", teaching you the principles and theory through practical work, equipping you with the know-how and tools to become an expert at applying deep learning to solve computer vision problems.

What's inside?
  • The first principles of vision and how computers can be taught to "see"
  • Different tasks and applications of computer vision
  • The tools of the trade that will make your work easier
  • Finding, creating and utilizing datasets for computer vision
  • The theory and application of Convolutional Neural Networks
  • Handling domain shift, co-occurrence, and other biases in datasets
  • Transfer Learning and utilizing others' training time and computational resources for your benefit
  • Building and training a state-of-the-art breast cancer classifier
  • How to apply a healthy dose of skepticism to mainstream ideas and understand the implications of widely adopted techniques
  • Visualizing a ConvNet's "concept space" using t-SNE and PCA
  • Case studies of how companies use computer vision techniques to achieve better results
  • Proper model evaluation, latent space visualization and identifying the model's attention
  • Performing domain research, processing your own datasets and establishing model tests
  • Cutting-edge architectures, the progression of ideas, what makes them unique and how to implement them
  • KerasCV - a WIP library for creating state of the art pipelines and models
  • How to parse and read papers and implement them yourself
  • Selecting models depending on your application
  • Creating an end-to-end machine learning pipeline
  • Landscape and intuition on object detection with Faster R-CNNs, RetinaNets, SSDs and YOLO
  • Instance and semantic segmentation
  • Real-Time Object Recognition with YOLOv5
  • Training YOLOv5 Object Detectors
  • Working with Transformers using KerasNLP (industry-strength WIP library)
  • Integrating Transformers with ConvNets to generate captions of images
  • DeepDream
Conclusion

In this short guide, we've taken a look at how you can perform object detection with YOLOv5 built using PyTorch.

Categories: FLOSS Project Planets

Jonathan Dowland: IKEA HEMNES Shoe cabinet repair

Planet Debian - Thu, 2022-08-25 06:16

Over time the screw hole into the wooden front section of our IKEA HEMNES Shoe cabinet had worn out and it was not possible to secure a screw at that position any more. I designed a little 'wedge' of plastic to sit over the fitting and provide some offset screw holes.

At the time, I had a very narrow window of access to our office 3d printer, so I designed it almost as a "speed coding" session in OpenSCAD: in between 5 and 10 minutes, guessing at the exact dimensions of the fitting it sits over.

Categories: FLOSS Project Planets

Plasma 5.25 for Jammy 22.04 available via PPA

Planet KDE - Thu, 2022-08-25 05:45

We have had many requests to make Plasma 5.25 available in our backports PPA for Jammy Jellyfish 22.04.

Providing backports of new Plasma versions to a LTS release always must be an ‘opt-in’ process, however we are aware that many of our users now are accustomed to adding our backports PPA as a matter of course, so for a LTS release some additional caution is required in what we make available there.

Therefore, at least for the time being, Plasma 5.25 will be available via an additional ‘backports-extra’ PPA, so users can make a positive informed choice to upgrade.

This PPA is intended to be used in combination with our standard backports PPA, but should also work standalone.

As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps) and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.

To add the PPA and upgrade, do:

sudo add-apt-repository ppa:kubuntu-ppa/backports-extra && sudo apt full-upgrade -y

We hope keen adopters enjoy using Plasma 5.25.

Categories: FLOSS Project Planets

The Future of KDAB CI

Planet KDE - Thu, 2022-08-25 05:00

For years, we at KDAB have been using Buildbot as our build and continuous integration system. Gerrit hosts all our projects and is our code review platform. Our deployment of Buildbot and build machines has naturally grown over the years. It builds hundreds of configurations and up to a thousand builds daily, but issues with reliability and quality of service called for a major restructuring. Over the past year, we gradually developed and migrated to new infrastructure and, once that was in place, we were finally able to add some long-awaited features.

Buildbot at KDAB

Buildbot is a continuous integration system written in Python. It offers a high degree of flexibility, because the build configuration is fully written in Python. Thanks to an extensive homegrown library of functions and features, we need only a few lines of code to build, test and package a new C++ and Qt/QML application on Linux, Windows, Mac as well as Android, iOS and other platforms. Many of our projects build against multiple different compilers and Qt versions per platform. Sometimes application bundles need to be signed and notarized (looking at you, Apple). Once the build is finished, the apps are offered for download from our servers and developer app stores like Visual Studio App Center. Developers are notified about build failures per email, our chat system and, of course, in our code review tool Gerrit.

System Architecture

From a system architecture point of view, Buildbot follows a master/worker paradigm: Workers are virtual machines, bare metal servers or docker containers with build environments. Each worker runs a Buildbot process which receives and executes build commands. A central master keeps track of changes in Gerrit, build configurations, workers, builds and build results. In large deployments like ours, the master is made up of multiple Buildbot processes with different responsibilities: One process to serve the web interface, and another to coordinate builds. They rely on a variety of additional services: MariaDB for data storage, Apache2 to manage user authentication and Crossbar.io to coordinate messages between the master’s Buildbot processes. The Buildbot master centrally controls how builds are run on the workers and issues every build command individually to the respective worker.

This setup enabled, among others, the following features:

  • build 100s of projects, configurations and builds daily, 
  • monitor 100s of Git repositories for changes, hosted on our internal Gerrit, on GitHub and other platforms
  • report build results via email, into our chat system, to Gerrit and GitHub – accessible to customers and KDABians alike
  • web interface to provide insight into builds (for KDABians only) and build artifacts

Over the past years, the number of projects and daily builds has grown steadily. The speed of builds and of the web interface degraded correspondingly. The master processes and all accompanying services were hosted on a single VM with a networked file system – and the file system was quickly identified as main bottleneck. Unfortunately, the traditional approach to deployment hampered our efforts to move the system to bare metal hardware with decent file system speed: All dependencies, the services and Buildbot itself were installed directly on the system using the system package manager and Python’s pip. Buildbot configuration relied heavily on hard-coded values, including IP addresses and file system paths. For every new instance we would have to recreate the setup step by step, a very laborious process that would have left us with a setup just as inflexible as the old one.

Modern Buildbot at KDAB

Therefore, we decided to encapsulate Buildbot and all services in Docker containers and use Docker Compose to describe the whole stack. To that end, we undertook the following steps:

  • refactor the Buildbot configuration so that the configuration supports multiple independent instances of the Buildbot master:
    • read instance-specific parameters like URL, website name, address of back-end services, etc. from environment variables
    • allow to read different Python scripts to describe builds and build workers, configured via environment variables
  • create Docker images
    • for Buildbot, including all dependencies and our custom patches
    • for Apache2
    • for an SSH server to receive artifact uploads from workers
    • a monitoring solution (Telegraf) to collect performance data from the services
    • and more
  • create a Docker Compose configuration to describe the services and their interactions

The Docker Compose configuration is instance-independent: In order to create a new instance of Buildbot master, with its own set of workers, build configuration, URL, name and so forth, we simply copy the Docker Compose configuration to the host system and create an environment file which describes the instance. Setting up a new Buildbot instance is now quick and straightforward.

Quality of Service Improved

Provided with these new tools, we directly created two new Buildbot instances: One to replace the old instance for KDAB-internal and customer projects, and another to build projects of one of our larger customers. For this customer we already provided code hosting and continuous integration services, but their developers could not directly access Buildbot’s web interface with details on the builds, due to limitations in Buildbot’s access management. The new separate instance removes this obstacle, and our customer now has full access to builds.

The transition brought about many other improvements to our customers, developers and system administrators:

  • Build speed improved drastically: The time from creation of a commit on Gerrit to the beginning of the respective builds dropped from typically more than 5 minutes to less than a second. The overall build time often dropped by more than 50%.
  • The system became more reliable, with fewer crashes and freezes.
  • Time to update build configuration on the fly, e.g. to add a new project, decreased from minutes to a few seconds.
  • Special configuration of Apache2 makes it easy to brand the Buildbot web interface, not only by changing the instance’s name but also by changing colors – a feature which Buildbot does not offer natively.
  • We can offer dedicated Buildbot instances to customers so that they have direct access to build results.
Build Results in Gerrit

For years, platforms like GitHub have presented build results in the web interface and let build failures block merges. Gerrit has only offered rudimentary support for this: Buildbot could block submission of patches, and it could create comments to report build errors, but the presentation in Gerrit was cluttered and often confusing – a workaround at best.

Luckily, Gerrit 3.4 introduced the Checks API. JavaScript plug-ins in Gerrit fetch information from a CI system and supply it to the Gerrit web interface. Gerrit then displays the build results right there with the commit message and review comments. The interface shows every build configuration separately, and even build output like compile errors or test failures are right there in Gerrit. The Gerrit project provides an example of what this can look like.

So far, there is no plug-in for Buildbot publicly available, so we developed our own. Now, when a developer opens a change on Gerrit, our JavaScript plug-in queries Buildbot’s REST API for builds. The script will automatically determine the correct instance of Buildbot, and whether the user has access to that particular instance at all.
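
For the curious, the queries themselves are plain JSON-over-HTTP calls against Buildbot's data API. Here is a minimal Python sketch, assuming a hypothetical instance at buildbot.example.com (our actual plug-in does this in JavaScript):

import requests

# Buildbot exposes its data API over REST under /api/v2/
base_url = "https://buildbot.example.com/api/v2"

# Ask for the ten most recent builds, newest first
response = requests.get(f"{base_url}/builds", params={"limit": 10, "order": "-buildid"})
response.raise_for_status()

for build in response.json()["builds"]:
    print(build["buildid"], build["state_string"])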

We needed to come up with a few tricks to make this happen. First, Buildbot did not offer an efficient way to query builds for a Gerrit change. We added that feature, provided the patch to Buildbot upstream, and deployed it on our instances. This, by the way, is not our first contribution to Buildbot. Over the years we contributed 40 patches. Second, we introduced an additional endpoint on the web server to gracefully check whether a Gerrit user has access to a particular Buildbot instance in the first place. We do this to avoid unnecessary log-in dialogs in the web interface. Third, we created a custom data store in Gerrit to map repositories to Buildbot instances.

The new Docker Compose configuration helped significantly with this: We could easily develop and test all of these changes on local development instances of Buildbot. Deployment to the production instances was also quick and efficient. Fundamentally, without the performance improvements that the new instances brought, this feature would have not been possible. Feedback by our developers has been great so far.

Conclusion

This is not all, of course. We are currently looking into using Docker and VM images to create reproducible build environments. Developers get access and can then debug build failures in the exact same environment as the CI. We are also investigating ways to upstream the Gerrit plug-in.

We at KDAB consider best practices and efficient workflows to be a large part of creating great software. That is why we keep investing into better infrastructure, for ourselves and for our customers. If you’d like to learn details about our infrastructure or discuss similar projects, feel free to get in touch.

 

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post The Future of KDAB CI appeared first on KDAB.

Categories: FLOSS Project Planets

Matt Layman: Finish Teacher Checklist - Building SaaS with Python and Django #142

Planet Python - Wed, 2022-08-24 20:00
In this episode, we finished off the teacher checklist feature in the homeschool app. I tied together all the loose ends, checked the feature end to end, and wrote the unit tests to complete the whole effort.
Categories: FLOSS Project Planets

Kate - New Features - August 2022

Planet KDE - Wed, 2022-08-24 15:35

The 22.08 release of Kate hasn’t arrived for many users yet, but we already have new cool stuff for upcoming releases.

As our merge requests page shows, in August alone we got at least 66 new things done. Naturally, those are not all new features; bug fixes are included, too.

Pablo Rauzy was nice enough to provide some short videos for the enhancements he contributed! Thanks a lot for that, and thanks to all people that helped to work out the merge requests he submitted.

The align command

First, some feature that was added on the KTextEditor level, that means all applications using that framework, like Kate, KWrite, KDevelop, Kile, and Co. will profit.

The new command aligns lines in the selected block or whole document on the column given by a regular expression that you will be prompted for.

[Video: demonstration of the align command]

This is not only exposed as a UI action; it is available in the integrated command line as “alignon” and in the JS bindings, too. Refer to the Kate manual for more information.

Intuitive movements between split views

The Vi mode already had this via Vi commands; now it is available as regular UI actions, too, which can be assigned shortcuts.

You can move between the individual split views by direction.

[Video: demonstration of moving between split views]

Keyboard Macros Plugin

Last but not least, perhaps the most anticipated feature of these three: a macro recording plugin

We have had requests for this since close to forever; now we have an initial version.

More details are in the manual, too; below is a short demonstration.

[Video: demonstration of the keyboard macros plugin]

More?

There is more stuff done, and more still in progress, if you skim over the list of merge requests.

A nice thing to mention: Pablo Rauzy is a new contributor, which shows it is feasible to contribute non-trivial features to Kate without months or years of experience with our code base. Naturally, C++/Qt/KF5 isn’t the easiest entry point, but it is nothing insurmountable, and our contributors are certainly trying to help new people find their way and get things done.

If you are in doubt, just read the above linked merge requests for the new things shown in this post.

Comments?

A matching thread for this can be found here on r/KDE.

Categories: FLOSS Project Planets

Oomph Insights: Keeping a Remote Team Connected: Oomph’s 2022 Summit

Planet Drupal - Wed, 2022-08-24 14:29
Being part of a 100% remote team has its ups and downs. While Oomph had remote employees well before the pandemic, we shifted to a fully distributed team as a result of it. And while the glorious flexibility we have to work from anywhere in the US is hugely supportive, it comes with some challenges — like feeling disconnected. As a personal team, relationship building is crucial to us, and we’ve used virtual and in-person gatherings for years to help us unite. While the pandemic sure did squash our ability to share physical space, it also reinforced the importance of supporting each other. So…
Categories: FLOSS Project Planets

Jonathan Dowland: Our Study, 2022

Planet Debian - Wed, 2022-08-24 11:41

Two years ago I blogged a photo of my study. I’d been planning to revisit that for a while but was somewhat embarrassed by the state of it; I’ve finally decided to bite the bullet.

Fisheye shot of my home office, 2022

What's changed

The supposedly-temporary 4x4 KALLAX has become a permanent feature. I managed to wedge it on the right-hand side far wall, next to the bookcase. They fit snugly together. Since I'd put my turntable on top, I've now dedicated the top row of four spaces to 12" records. (There's a close-up pic here).

My hi-fi speakers used to be in odd places: they're now on my desktop. Also on my desktop: a camera, repurposed as a webcam, and a 90s old Creative Labs beige microphone; both to support video conferencing.

The desktop is otherwise, largely unchanged. My Amiga 500 and Synthesiser had continued to live there until very recently when I had an accident with a pot of tea. I'm in two minds as to whether I'll bring them back: having the desk clear is quite nice.

There's a lot of transient stuff and rubbish to sort out: the bookcase visible on the left, the big one behind my chair on the right (itself to get rid of); and the collection of stuff on the floor. Sadly, the study is the only room in our house where things like this can be collected prior to disposal: it's disruptive, but less so than if we stuffed them in a bedroom.

You can't easily see the "temporary" storage unit for Printer(s) that used to be between bookcases on the right-hand wall. It's still there, situated behind my desk chair. I did finally get rid of the deprecated printer (and I plan to change the HP laser too, although that's a longer story). The NAS, I have recently moved to the bottom-right Kallax cube, and that seems to work well. There's really no other space in the Study for the printer.

Also not pictured: a much improved ceiling light.

What would I like to improve

First and foremost, get rid of all the transient stuff! It's a simple matter of not having put the time in to sort it out.

If I manage that, I've been trying to think about how to best organise material relating to ongoing projects. Some time ago I salivated over this home office tour for an embedded developer. Jay has an interesting project tray system. I'm thinking of developing something like that, with trays or boxes I can store in the Kallax to my right.

I'd love to put a comfortable reading chair, perhaps a wing-backed thing, and a reading light, over on the left-hand side near the window. And/or, a bench at a height enabling me to do the occasional bit of standing work, and/or to support the Alesis Micron (or a small digital Piano).

Categories: FLOSS Project Planets

Emmanuel Kasper: Investigating database replication in different availability zones

Planet Debian - Wed, 2022-08-24 11:09

Today I am investigating AWS Relational Database Service with two readable standbys.

Considering your current read/write server is in Availability Zone AZ1, this is basically PostgreSQL 14 with synchronous_standby_names = 'ANY 1 (az2, az3)' and synchronous_commit = on.

In regard to safety of data, it looks similar to the Raft algorithm used by etcd with three members, as a write is only ack’ed if it has been fsynced by two servers. The difference is that Raft has leader election, whereas in PostgreSQL the leader is set at startup and you have to build the election mechanism yourself.
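
If you want to inspect such a setup yourself, the pg_stat_replication view on the primary lists each connected standby and its synchronization state. A small sketch using psycopg2, with placeholder connection details:

import psycopg2

# Connect to the primary in AZ1 (placeholder host and credentials)
conn = psycopg2.connect(host="primary.example.com", dbname="postgres", user="postgres")

with conn.cursor() as cur:
    # sync_state is one of: async, potential, sync or quorum
    cur.execute("SELECT application_name, sync_state FROM pg_stat_replication;")
    for name, state in cur.fetchall():
        print(name, state)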

There is no special cloud magic here, it is just database good practices paid by the minute.

Categories: FLOSS Project Planets

Real Python: Python's exec(): Execute Dynamically Generated Code

Planet Python - Wed, 2022-08-24 10:00

Python’s built-in exec() function allows you to execute arbitrary Python code from a string or compiled code input.

The exec() function can be handy when you need to run dynamically generated Python code, but it can be pretty dangerous if you use it carelessly. In this tutorial, you’ll learn not only how to use exec(), but just as importantly, when it’s okay to use this function in your code.

In this tutorial, you’ll learn how to:

  • Work with Python’s built-in exec() function
  • Use exec() to execute code that comes as strings or compiled code objects
  • Assess and minimize the security risks associated with using exec() in your code

Additionally, you’ll write a few examples of using exec() to solve different problems related to dynamic code execution.

To get the most out of this tutorial, you should be familiar with Python’s namespaces and scope, and strings. You should also be familiar with some of Python’s built-in functions.

Sample Code: Click here to download the free sample code that you’ll use to explore use cases for the exec() function.

Getting to Know Python’s exec()

Python’s built-in exec() function allows you to execute any piece of Python code. With this function, you can execute dynamically generated code. That’s the code that you read, auto-generate, or obtain during your program’s execution. Normally, it’s a string.

The exec() function takes a piece of code and executes it as your Python interpreter would. Python’s exec() is like eval() but even more powerful and prone to security issues. While eval() can only evaluate expressions, exec() can execute sequences of statements, as well as imports, function calls and definitions, class definitions and instantiations, and more. Essentially, exec() can execute an entire fully featured Python program.
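
A quick way to see this difference is to feed the same statement to both functions:

# eval() only accepts expressions, so a statement raises a SyntaxError
try:
    eval("x = 10")
except SyntaxError:
    print("eval() can't run statements")

# exec() happily runs statements, including assignments
exec("x = 10")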

The signature of exec() has the following form:

exec(code [, globals [, locals]])

The function executes code, which can be either a string containing valid Python code or a compiled code object.

Note: Python is an interpreted language instead of a compiled one. However, when you run some Python code, the interpreter translates it into bytecode, which is an internal representation of a Python program in the CPython implementation. This intermediate translation is also referred to as compiled code and is what Python’s virtual machine executes.

If code is a string, then it’s parsed as a suite of Python statements, which is then internally compiled into bytecode, and finally executed, unless a syntax error occurs during the parsing or compilation step. If code holds a compiled code object, then it’s executed directly, making the process a bit more efficient.
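
For example, you can compile a source string once with the built-in compile() function in "exec" mode and then run the resulting code object repeatedly without paying the parsing cost each time:

# Parse and compile once...
code_obj = compile("print('Hello, World!')", "<string>", "exec")

# ...then execute the code object as many times as needed
for _ in range(3):
    exec(code_obj)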

The globals and locals arguments allow you to provide dictionaries representing the global and local namespaces in which exec() will run the target code.
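
For instance, you can run a piece of code against a throwaway global namespace so that its side effects never leak into your real globals:

custom_globals = {}

exec("result = 2 ** 10", custom_globals)

print(custom_globals["result"])  # 1024
print("result" in globals())     # False: our own namespace stays clean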

The exec() function’s return value is None, probably because not every piece of code has a final, unique, and concrete result. It may just have some side effects. This behavior notably differs from eval(), which returns the result of the evaluated expression.

To get an initial feeling of how exec() works, you can create a rudimentary Python interpreter with two lines of code:

>>> while True:
...     exec(input("->> "))
...
->> print("Hello, World!")
Hello, World!
->> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
...
->> x = 10
->> if 1 <= x <= 10: print(f"{x} is between 1 and 10")
10 is between 1 and 10

In this example, you use an infinite while loop to mimic the behavior of a Python interpreter or REPL. Inside the loop, you use input() to get the user’s input at the command line. Then you use exec() to process and run the input.

This example showcases what’s arguably the main use case of exec(): executing code that comes to you as a string.

Note: You’ve learned that using exec() can imply security risks. Now that you’ve seen the main use case of exec(), what do you think those security risks might be? You’ll find the answer later in this tutorial.

You’ll commonly use exec() when you need to dynamically run code that comes as a string. For example, you can write a program that generates strings containing valid Python code. You can build these strings from parts that you obtain at different moments in your program’s execution. You can also use the user’s input or any other input source to construct these strings.

Once you’ve built the target code as strings, then you can use exec() to execute them as you would execute any Python code.

In this situation, you can rarely be certain of what your strings will contain. That’s one reason why exec() implies serious security risks. This is particularly true if you’re using untrusted input sources, like a user’s direct input, in building your code.

Read the full article at https://realpython.com/python-exec/ »


Categories: FLOSS Project Planets

Drupal Core News: Drupal 10.0.0-beta1 and Drupal 9.5.0-beta1 will be released the week of September 12, 2022

Planet Drupal - Wed, 2022-08-24 09:39

In preparation for the new major release, Drupal 10.0.x and Drupal 9.5.x will enter the beta phase the week of September 12, 2022.

Core developers should plan to complete changes that are only allowed in major releases before the beta release. The Drupal 10.0.0-beta1 and Drupal 9.5.0-beta1 deadline for most core patches is September 9, 2022.

Developers and site owners can begin testing 10.0 and 9.5 once the betas are released.

During the beta phase, core issues will be committed according to the policy on allowed changes during beta.

  1. Most issues that are allowed for patch releases will be committed to 10.1.x and backported to 10.0.x, 9.5.x, and 9.4.x.

  2. Most issues that are only allowed in minor releases will be committed to 10.1.x only. A few strategic issues (such as PHP 8.2 or Symfony 6.2 compatibility fixes) may be backported to 10.0.x and 9.5.x, but only at committer discretion after the issue is fixed in 10.1.x (so leave them set to 10.1.x unless you are a committer), and only up until the release candidate deadline.

The 10.1.x branch of core has already been created, and future feature and API additions are already targeted against that branch instead of 10.0.x.

Further beta releases may be made available as needed.

The release candidate phase will begin the week of November 14. See the Drupal 10.0.0 requirements for the issues required between 10.0.0-beta1 and 10.0.0-rc1 or 10.0.0.

The scheduled release date for Drupal 10.0.0 and 9.5.0 is December 14, 2022.

Bugfix and security support of Drupal 9

Security coverage for Drupal 9 is generally provided for the previous minor release as well as the newest minor release. Based on these the following changes are upcoming:

Drupal 9.3.x: Security releases will be provided until the release of Drupal 9.5.0 on December 14, 2022.

Drupal 9.4.x: Normal bugfix support ends on December 14, 2022. Security releases are provided until the release of Drupal 10.1.0 on June 21, 2023.

Drupal 9.5.x: Normal bugfix support ends on June 21, 2023 with the release of Drupal 10.1.0. Security releases are provided until the end of life of Symfony 4 in November 2023.

Categories: FLOSS Project Planets

Python for Beginners: Delete Attribute From an Object in Python

Planet Python - Wed, 2022-08-24 09:00

Python is an object-oriented programming language. We often use objects defined with custom classes while programming. In this article, we will discuss how we can delete an attribute from an object in Python. 

Delete Attribute From an Object using the del statement in Python

The del statement can be used to delete any object as well as its attributes. The syntax for the del statement is as follows.

del object_name

To see how we can delete an attribute from an object, let us first create a custom class Person with the attributes name, age, SSN, and weight.

class Person:
    def __init__(self, name, age, SSN, weight):
        self.name = name
        self.age = age
        self.SSN = SSN
        self.weight = weight

Now we will create an object named person1 of the Person class. After that, we will delete the attribute weight from the person1 object using the del statement as shown below. 

class Person:
    def __init__(self, name, age, SSN, weight):
        self.name = name
        self.age = age
        self.SSN = SSN
        self.weight = weight

    def __str__(self):
        return "Name:" + str(self.name) + " Age:" + str(self.age) + " SSN: " + str(self.SSN) + " weight:" + str(
            self.weight)


person1 = Person(name="Will", age="40", SSN=1234567890, weight=60)
print(person1)
del person1.weight
print(person1)

Output:

Name:Will Age:40 SSN: 1234567890 weight:60
Traceback (most recent call last):
  File "/home/aditya1117/PycharmProjects/pythonProject/string1.py", line 16, in <module>
    print(person1)
  File "/home/aditya1117/PycharmProjects/pythonProject/string1.py", line 10, in __str__
    self.weight)
AttributeError: 'Person' object has no attribute 'weight'

In the above example, you can see that we can print the attribute weight before the execution of the del statement. When we try to print the attribute weight after the execution of the del statement, the program runs into the AttributeError exception saying that there is no attribute named weight in the object. Hence, we have successfully deleted the attribute from the object using the del statement in python. 

Delete Attribute From an Object Using the delattr() Function in Python

We can also delete an attribute from an object using the delattr() function. The delattr() function accepts an object as its first input argument and the attribute name as its second input argument. After execution, it deletes the attribute from the given object. You can observe this in the following example.

class Person:
    def __init__(self, name, age, SSN, weight):
        self.name = name
        self.age = age
        self.SSN = SSN
        self.weight = weight

    def __str__(self):
        return "Name:" + str(self.name) + " Age:" + str(self.age) + " SSN: " + str(self.SSN) + " weight:" + str(
            self.weight)


person1 = Person(name="Will", age="40", SSN=1234567890, weight=60)
print(person1)
delattr(person1, "weight")
print(person1)

Output:

Name:Will Age:40 SSN: 1234567890 weight:60
Traceback (most recent call last):
  File "/home/aditya1117/PycharmProjects/pythonProject/string1.py", line 16, in <module>
    print(person1)
  File "/home/aditya1117/PycharmProjects/pythonProject/string1.py", line 10, in __str__
    self.weight)
AttributeError: 'Person' object has no attribute 'weight'

You can observe that we are able to print the weight attribute of the person1 object before the execution of the delattr() function. After execution of the delattr() function, the program raises the AttributeError exception when we try to print the weight attribute of the person1 object denoting that the attribute has been deleted.

If we pass an attribute name that doesn’t already exist in the object, it raises the AttributeError exception as shown below.

class Person:
    def __init__(self, name, age, SSN, weight):
        self.name = name
        self.age = age
        self.SSN = SSN
        self.weight = weight

    def __str__(self):
        return "Name:" + str(self.name) + " Age:" + str(self.age) + " SSN: " + str(self.SSN) + " weight:" + str(
            self.weight)


person1 = Person(name="Will", age="40", SSN=1234567890, weight=60)
print(person1)
delattr(person1, "BMI")
print(person1)

Output:

Name:Will Age:40 SSN: 1234567890 weight:60
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.7) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Traceback (most recent call last):
  File "/home/aditya1117/PycharmProjects/pythonProject/string1.py", line 15, in <module>
    delattr(person1, "BMI")
AttributeError: BMI

Here, you observe that we have tried to delete the BMI attribute from the person1 object that is not present in the object. Hence, the program runs into the AttributeError exception.
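
If you aren't sure whether an attribute exists, you can guard the deletion with the built-in hasattr() function to avoid this exception:

if hasattr(person1, "BMI"):
    delattr(person1, "BMI")
else:
    print("person1 has no attribute named BMI")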

Conclusion

In this article, we have discussed two ways to delete an attribute from an object in python. To learn more about objects and classes, you can read this article on classes in python. You might also like this article on list comprehension in python. 


The post Delete Attribute From an Object in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

John Ludhi/nbshare.io: Join or Merge Lists In Python

Planet Python - Wed, 2022-08-24 06:38
Join or Merge Lists In Python

In this notebook, we will go through the following topics

Join / Merge two lists in python using + operator.
Join / Merge two lists in python using list.extend().
Join / Merge two lists in python using unpacking.
Join / Merge two lists in python using itertools.
Join / Merge two lists in python using for loop.

Join / Merge two lists in Python using + operator

Let us create random lists of numbers using Python's random module.

In [1]:
import random
random.seed(4)
l1 = random.sample(range(0, 9), 5)
l2 = random.sample(range(0, 9), 5)

In [2]:
print(l1)
print(l2)

[3, 4, 0, 5, 8]
[7, 2, 0, 6, 5]

We can use the + operator in Python to add two or more lists as long as the lists are of the same type.

In [3]:
l1 + l2

Out[3]:
[3, 4, 0, 5, 8, 7, 2, 0, 6, 5]

In [4]:
print(l1)
print(l2)

[3, 4, 0, 5, 8]
[7, 2, 0, 6, 5]

We can use the '+' operator on lists of characters or strings too.

In [5]:
l1 = 'john ludhi'.split(' ')

In [6]:
l1

Out[6]:
['john', 'ludhi']

In [7]:
l2 = 'elon musk'.split(' ')

In [8]:
l2

Out[8]:
['elon', 'musk']

In [9]:
l1 + l2

Out[9]:
['john', 'ludhi', 'elon', 'musk']

Join / Merge two lists in Python using list.extend()

In [10]:
import random
random.seed(4)
l1 = random.sample(range(0, 9), 5)
l2 = random.sample(range(0, 9), 5)

In [11]:
print(l1)
print(l2)

[3, 4, 0, 5, 8]
[7, 2, 0, 6, 5]

In [12]:
l1.extend(l2)

Let us print list l1

In [13]:
print(l1)

[3, 4, 0, 5, 8, 7, 2, 0, 6, 5]

We can use the list.extend() method on lists of strings or characters too.

In [14]:
l1 = 'john ludhi'.split(' ')
l2 = 'elon musk'.split(' ')

In [15]:
print(l1)
print(l2)

['john', 'ludhi']
['elon', 'musk']

In [16]:
l1.extend(l2)

In [17]:
print(l1)

['john', 'ludhi', 'elon', 'musk']

Join / Merge two lists in Python using unpacking

In [18]:
l1 = 'john ludhi'.split(' ')
l2 = 'elon musk'.split(' ')

In [19]:
mergelist = [*l1, *l2]

In [20]:
mergelist

Out[20]:
['john', 'ludhi', 'elon', 'musk']

In [21]:
import random
random.seed(4)
l1 = random.sample(range(0, 9), 5)
l2 = random.sample(range(0, 9), 5)
print(l1)
print(l2)

[3, 4, 0, 5, 8]
[7, 2, 0, 6, 5]

In [22]:
mergelist = [*l1, *l2]
print(mergelist)

[3, 4, 0, 5, 8, 7, 2, 0, 6, 5]

Join / Merge two lists in Python using itertools

We can use the itertools module to chain two or more lists in Python

itertools.chain(list1, list2, ...)

In [23]:
import random
random.seed(4)
l1 = random.sample(range(0, 9), 5)
l2 = random.sample(range(0, 9), 5)

In [24]:
import itertools
result = itertools.chain(l1, l2)
result

Out[24]:
<itertools.chain at 0x7f2c50787d30>

We can go through the above chain using a for loop.

In [25]:
import itertools
for l in itertools.chain(l1, l2):
    print(l)

3
4
0
5
8
7
2
0
6
5

Or we can convert it into a Python list.

In [26]:
list(itertools.chain(l1, l2))

Out[26]:
[3, 4, 0, 5, 8, 7, 2, 0, 6, 5]

We can chain multiple lists too with itertools.chain()

In [27]:
list(itertools.chain(l1, l2, l2))

Out[27]:
[3, 4, 0, 5, 8, 7, 2, 0, 6, 5, 7, 2, 0, 6, 5]

Join / Merge two lists in Python using for loop

In [28]:
import random
random.seed(4)
l1 = random.sample(range(0, 9), 5)
l2 = random.sample(range(0, 9), 5)

In [29]:
print(l1)
print(l2)

[3, 4, 0, 5, 8]
[7, 2, 0, 6, 5]

In [30]:
for elem in l2:
    l1.append(elem)

In [31]:
print(l1)

[3, 4, 0, 5, 8, 7, 2, 0, 6, 5]

To learn more about Python append, check out the following notebook:
https://www.nbshare.io/notebook/767549040/Append-In-Python/

Categories: FLOSS Project Planets

Stack Abuse: Don't Use Flatten() - Global Pooling for CNNs with TensorFlow and Keras

Planet Python - Wed, 2022-08-24 06:30

Most practitioners, while first learning about Convolutional Neural Network (CNN) architectures - learn that they're composed of three basic segments:

  • Convolutional Layers
  • Pooling Layers
  • Fully-Connected Layers

Most resources have some variation on this segmentation, including my own book. Especially online - fully-connected layers refer to a flattening layer and (usually) multiple dense layers.

This used to be the norm, and well-known architectures such as VGGNets used this approach, and would end in:

model = keras.Sequential([
    # ...
    keras.layers.MaxPooling2D((2, 2), strides=(2, 2), padding='same'),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(4096, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(4096, activation='relu'),
    keras.layers.Dense(n_classes, activation='softmax')
])

Though, for some reason, it's oftentimes forgotten that VGGNet was practically the last architecture to use this approach, due to the obvious computational bottleneck it creates. Starting with ResNets, published just the year after VGGNets (and 7 years ago), practically all mainstream architectures have ended their model definitions with:

model = keras.Sequential([
    # ...
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(n_classes, activation='softmax')
])

Flattening in CNNs has been sticking around for 7 years. 7 years! And not enough people seem to be talking about the damaging effect it has on both your learning experience and the computational resources you're using.

Global Average Pooling is preferable on many accounts over flattening. If you're prototyping a small CNN - use Global Pooling. If you're teaching someone about CNNs - use Global Pooling. If you're making an MVP - use Global Pooling. Use flattening layers for the other use cases where they're actually needed.

Case Study - Flattening vs Global Pooling

Global Pooling condenses each feature map into a single value, pooling all of the relevant information into a compact vector that can be easily understood by a single dense classification layer instead of multiple layers. It's typically applied as average pooling (GlobalAveragePooling2D) or max pooling (GlobalMaxPooling2D) and can work for 1D and 3D input as well.
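To see what the layer actually computes, here's a minimal sketch of my own using standard tf.keras ops: GlobalAveragePooling2D is simply a mean over the two spatial axes.

    import tensorflow as tf

    x = tf.random.normal((1, 7, 7, 32))

    gap = tf.keras.layers.GlobalAveragePooling2D()(x)
    manual = tf.reduce_mean(x, axis=[1, 2])  # average over height and width

    print(gap.shape)  # (1, 32)
    print(bool(tf.reduce_all(tf.abs(gap - manual) < 1e-5)))  # True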

Instead of flattening a feature map such as (7, 7, 32) into a vector of length 1568 and training one or multiple layers to discern patterns from this long vector: we can condense it into a (32,) vector and classify directly from there. It's that simple!
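To make the shape arithmetic concrete, here's a quick sketch of my own comparing the two ops on a dummy (7, 7, 32) feature map:

    import tensorflow as tf

    # A dummy batch holding one (7, 7, 32) feature map
    feature_map = tf.random.normal((1, 7, 7, 32))

    flattened = tf.keras.layers.Flatten()(feature_map)
    pooled = tf.keras.layers.GlobalAveragePooling2D()(feature_map)

    print(flattened.shape)  # (1, 1568) - 7 * 7 * 32 values per sample
    print(pooled.shape)     # (1, 32)   - one average per feature map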

Note that bottleneck layers for networks like ResNets count in tens of thousands of features, not a mere 1568. When flattening, you're torturing your network to learn from oddly-shaped vectors in a very inefficient manner. Imagine a 2D image being sliced on every pixel row and then concatenated into a flat vector. Two pixels that used to be adjacent vertically are now feature_map_width pixels away horizontally! While this may not matter too much for a classification algorithm, which favors spatial invariance, this wouldn't even be conceptually good for other applications of computer vision.

Let's define a small demonstrative network that uses a flattening layer with a couple of dense layers:

model = keras.Sequential([
    keras.layers.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2), (2, 2)),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2), (2, 2)),
    keras.layers.BatchNormalization(),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

model.summary()

What does the summary look like?

...
dense_6 (Dense)              (None, 10)                330
=================================================================
Total params: 11,574,090
Trainable params: 11,573,898
Non-trainable params: 192
_________________________________________________________________

11.5M parameters for a toy network - and watch the parameters explode with larger input. 11.5M parameters. EfficientNets, one of the best-performing families of networks ever designed, work at around 6M parameters and can't be compared with this simple model in terms of actual performance and capacity to learn from data.
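Practically all of those parameters live in the first dense layer after the flatten. A back-of-the-envelope sketch of my own, following the model above (3x3 'valid' convolutions and 2x2 pooling shrink the spatial size 224 -> 222 -> 220 -> 110 -> 108 -> 106 -> 53):

    # Feature volume entering Flatten: (53, 53, 64)
    flattened_features = 53 * 53 * 64           # 179,776
    first_dense = flattened_features * 64 + 64  # weights + biases
    print(first_dense)  # 11,505,728 of the 11,574,090 total parameters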

We could reduce this number significantly by making the network deeper, which would introduce more max pooling (and potentially strided convolution) to reduce the feature maps before they're flattened. However, consider that we'd be making the network more complex in order to make it less computationally expensive, all for the sake of a single layer that's throwing a wrench in the plans.

Going deeper with layers should be about extracting more meaningful, non-linear relationships between data points, not about reducing the input size to cater to a flattening layer.

Here's a network with global pooling:

model = keras.Sequential([
    keras.layers.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2), (2, 2)),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2), (2, 2)),
    keras.layers.BatchNormalization(),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(10, activation='softmax')
])

model.summary()

Summary?

dense_8 (Dense)              (None, 10)                650
=================================================================
Total params: 66,602
Trainable params: 66,410
Non-trainable params: 192
_________________________________________________________________

Much better! If we go deeper with this model, the parameter count will increase, and we might be able to capture more intricate patterns of data with the new layers. If done naively, though, the same issues that hampered VGGNets will arise.

Going Further - Hand-Held End-to-End Project

Does your inquisitive nature make you want to go further? We recommend checking out our Guided Project: "Convolutional Neural Networks - Beyond Basic Architectures".

I'll take you on a bit of time travel - going from 1998 to 2022, highlighting the defining architectures developed throughout the years, what made them unique, what their drawbacks are, and implementing the notable ones from scratch. There's nothing better than getting your hands dirty when it comes to these.

You can drive a car without knowing whether the engine has 4 or 8 cylinders and what the placement of the valves within the engine is. However - if you want to design and appreciate an engine (computer vision model), you'll want to go a bit deeper. Even if you don't want to spend time designing architectures and want to build products instead, which is what most want to do - you'll find important information in this lesson. You'll get to learn why using outdated architectures like VGGNet will hurt your product and performance, and why you should skip them if you're building anything modern, and you'll learn which architectures you can go to for solving practical problems and what the pros and cons are for each.

If you're looking to apply computer vision to your field, using the resources from this lesson - you'll be able to find the newest models, understand how they work and by which criteria you can compare them and make a decision on which to use.

You don't have to Google for architectures and their implementations - they're typically very clearly explained in the papers, and frameworks like Keras make these implementations easier than ever. The key takeaway of this Guided Project is to teach you how to find, read, implement and understand architectures and papers. No resource in the world will be able to keep up with all of the newest developments. I've included the newest papers here - but in a few months, new ones will pop up, and that's inevitable. Knowing where to find credible implementations, compare them to papers and tweak them can give you the competitive edge required for many computer vision products you may want to build.

Conclusion

In this short guide, we've taken a look at an alternative to flattening in CNN architecture design. Albeit short - the guide addresses a common issue when designing prototypes or MVPs, and advises you to use a better alternative to flattening.

Any seasoned Computer Vision Engineer will know and apply this principle, and the practice is taken for granted. Unfortunately, it doesn't seem to be properly relayed to new practitioners who are just entering the field, and it can create sticky habits that take a while to get rid of.

If you're getting into Computer Vision - do yourself a favor and don't use flattening layers before classification heads in your learning journey.

Categories: FLOSS Project Planets

mark.ie: This is an AI Generated Article about "Drupal Security Best Practices"

Planet Drupal - Wed, 2022-08-24 05:58

While searching for a clickbait generator to post something funny on an internal team chat, I came across writecream.com, which claims to write high-quality AI-generated articles. Here's what it came up with for an article titled "Drupal Security Best Practices".

Categories: FLOSS Project Planets

Python Bytes: #298 "Unstoppable" Python

Planet Python - Wed, 2022-08-24 04:00
Watch the live stream: Watch on YouTube (https://www.youtube.com/watch?v=Y0_XATQubXU)

About the show

Sponsored by Microsoft for Startups Founders Hub (http://pythonbytes.fm/foundershub2022).

Brian #1: Uncommon Uses of Python in Commonly Used Libraries (https://eugeneyan.com/writing/uncommon-python/)

  • by Eugene Yan
  • Specifically, using relative imports almost all the time
  • Example from sklearn's base.py:
      from .utils.validation import check_X_y
      from .utils.validation import check_array
  • "Relative imports ensure we search the current package (and import from it) before searching the rest of the PYTHONPATH."
  • For relative imports, we have to use the from .something import thing form.
  • We cannot use import .something, since later on in the code .something isn't valid.
  • There's a good discussion of relative imports in PEP 328 (https://peps.python.org/pep-0328/#guido-s-decision)

Michael #2: Skyplane Cloud Transfers (https://twitter.com/aikidouke/status/1549687841265008642)

  • Skyplane is a tool for blazingly fast bulk data transfers in the cloud.
  • Skyplane manages parallelism, data partitioning, and network paths to optimize data transfers, and can also spin up VM instances to increase transfer throughput.
  • You can use Skyplane to transfer data:
      • Between buckets within a cloud provider
      • Between object stores across multiple cloud providers
      • (experimental) Between local storage and cloud object stores
  • Skyplane takes several steps to ensure the correctness of transfers: checksums, verifying that files exist and match sizes.
  • Data transfers in Skyplane are encrypted end-to-end.
  • Security: encrypted while in transit and over TLS, plus config options

Brian #3: 7 things I've learned building a modern TUI framework (https://www.textualize.io/blog/posts/7-things-about-terminals)

  • by Will McGugan
  • Specifically, DictViews are amazing. They have set operations.
  • Example of using items() to get views, then ^ for symmetric difference (done at the C level):
      # Get widgets which are new or changed
      print(render_map.items() ^ new_render_map.items())
  • Lots of other great topics in the article:
      • lru_cache is fast
      • Unicode art in addition to text in doc strings
      • The fractions module
      • and a cool embedded video demo of some of the new CSS stuff in Textual
  • Python's object allocator ASCII art (https://github.com/python/cpython/blob/4b4439daed3992a5c5a83b86596d6e00ac3c1203/Objects/obmalloc.c#L778)

Michael #4: 'Unstoppable' Python (https://www.infoworld.com/article/3669232/python-popularity-still-soaring.html)

  • Python popularity still soaring: Python once again ranked No. 1 in the August updates of both the Tiobe and Pypl indexes of programming language popularity.
  • Python first took the top spot in the Tiobe index last October (https://www.infoworld.com/article/3636789/python-tops-tiobe-language-index.html), becoming the only language besides Java and C to hold the No. 1 position.
  • "Python seems to be unstoppable," said the Tiobe commentary accompanying the August index.
  • In the alternative Pypl Popularity of Programming Language index (https://pypl.github.io/PYPL.html), which assesses language popularity based on Google searches of programming language tutorials, Python is way out front.

Extras

Brian:

  • Matplotlib stylesheets can make your chart look awesome with one line of code.
      • But it never occurred to me that I could write my own style sheet.
      • Here's an article discussing the creation of custom matplotlib stylesheets:
          • The Magic of Matplotlib Stylesheets (https://www.datafantic.com/the-magic-of-matplotlib-stylesheets/)
          • XKCD Plots (https://jakevdp.github.io/blog/2013/07/10/XKCD-plots-in-matplotlib/)

Michael:

  • Back on episode 295 (https://pythonbytes.fm/episodes/show/295/flutter-python-gui-apps) we talked about Flet (https://flet.dev). We now have a Talk Python episode on it (live: https://www.youtube.com/watch?v=kxsLRRY2xZA, polished: https://talkpython.fm/episodes/show/378/flet-flutter-apps-in-python).

Joke: Rakes and AWS (https://twitter.com/PR0GRAMMERHUM0R/status/1550254320637157379)
Categories: FLOSS Project Planets

KDE/Plasma for Debian – Update 2022/8

Planet KDE - Wed, 2022-08-24 01:41

Monthly update on KDE/Plasma on Debian: Updates to Frameworks and KDE Gears

I have packaged KDE Gears 22.08 as well as the latest frameworks, and Plasma got a point release. The status is as follows (all for Debian/stable, testing, and unstable):

  • Frameworks: 5.97
  • Gears: 22.08.0
  • Plasma 5.25: 5.25.4
  • Plasma 5.24 LTS: 5.24.6

I repeat (and update) instructions for all here, updated to use deb822 format (thanks to various comments on the blog here):

  • Get the key via curl -fsSL https://www.preining.info/obs-npreining.asc | sudo tee /usr/local/share/keyrings/obs-npreining.asc
  • Add the sources definition in /etc/apt/sources.list.d/obs-npreining-kde.sources, replacing the DISTRIBUTION part with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable:

      # deb822 source:
      # https://www.preining.info/blog/tag/kde/
      Types: deb
      URIs: https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/DISTRIBUTION
            https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/DISTRIBUTION
            https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma525/DISTRIBUTION
            https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2208/DISTRIBUTION
            https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/DISTRIBUTION
      Suites: /
      Signed-By: /usr/local/share/keyrings/obs-npreining.asc
      Enabled: yes
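With the key and the source file in place as above, the usual apt cycle should pick up the new packages (my addition for completeness, not from the original instructions):

    sudo apt update
    sudo apt full-upgrade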

Warnings/Todos:

  • Update of the other repository to compile against Gears 22.08
  • kopete and ktorrent don’t compile. If you can fix that, please let me know.

Enjoy!

Usual disclaimer: (1) Considering that I don't have a user-facing Debian computer anymore, all these packages are only tested by third parties and not by myself. Be aware! (2) It is funny to read the Debian Social Contract, Point 4: "Our priorities are our users and free software". Obviously I care a lot about my users - more than some other Debian members do.

Categories: FLOSS Project Planets

Factorial.io: A new edition of Splash Awards, finally

Planet Drupal - Tue, 2022-08-23 20:00

After last year’s edition of the German Splash Awards had to be cancelled due to the pandemic, we are happy to announce that we will host the Splash Awards Germany and Austria 2022 award ceremony in our office in November.

Categories: FLOSS Project Planets
