FLOSS Project Planets

Larry Garfield: The end of an era

Planet Drupal - Fri, 2016-04-22 14:06

Today is the end of an era. After just over ten and a half years, this is my last day with Palantir.net.

The past decade has seen Palantir grow from a company of 5 to a company of over 30. From a company that wouldn't touch the GPL with a ten-foot pole to a strong advocate for Open Source, Free Software, and Drupal in particular. From a company that did mostly subcontracting work for design firms to an end-to-end, soup-to-nuts agency. From having two desktop screen sizes anyone cared about to an infinite range of screens from 3" to 30". From a world where IE 5 for Mac was considered a good browser to one where, once again, the latest Microsoft browser is actually good. (Everything old is new again, I suppose.)

After ten years with the same company (which in Internet years is about a millennium) I certainly have stories. There's plenty I could say about Palantir, most of it good. :-) In the end, though, there's one thing in particular that has kept me here for so long.

Palantir.net is the kind of place that has your back.


Categories: FLOSS Project Planets

Acquia Developer Center Blog: Talking Drupal 8: Lightning Distribution and Module Acceleration Program

Planet Drupal - Fri, 2016-04-22 12:47

John Kennedy and I spoke about two exciting Drupal 8 projects he's running at Acquia in 2016. He's the Program Manager for the US $500,000 Acquia is investing to upgrade important modules from Drupal 7 to 8 as part of the Drupal 8 Module Acceleration Program. He's also the Product Manager of Acquia's enterprise authoring Drupal 8 distribution, Lightning.

"I think Drupal is great at a lot of things ... and if we want to create a world where regular people can create great experiences, not just developers, Drupal is a fantastic application for that." - John Kennedy

Interview video - 25 min. - Transcript Below

Guest dossier
How did you discover Drupal?

John Kennedy: Well, I was doing a little bit of work for a nonprofit organization called Vibewire maybe back in 2006 and they said to me, “We’ve got this website, it keeps going down and we really need to have you look at it for us.” I looked at it, it was on Drupal 4.7, loaded up to the brim with modules and I said “This is awful” and promptly migrated them to Plone.

jam: Thank you, have a nice day.

John Kennedy: Since then, I’ve had some better experiences with Drupal. I ended up running my own Drupal shop for a while and I then came out to the UK to actually start the UK operation of the Commerce Guys and I did that for a little while, and then Acquia brought me on to be head of solutions architecture for Europe and now they’ve brought me over here.

jam: Fill in the blank here between 2006 and 2016, from being a big Plone fan to actually sticking with Drupal all these years. What changed for you?

John Kennedy: I don’t know that I was a big Plone fan. I was a big open source fan. I’d been a systems administrator and I’d been using the range of tools on top of Linux for a long time. Plone at the time seemed more mature. I had some developers who I could use for Plone, but what happened was that I found a couple of projects that were really suitable for Drupal and I worked out how to use it, nontrivial at the time, at least ... still ... and then once I was dug in, I found it more and more useful and I really got in touch with the community. I started coming to DrupalCons. The first DrupalCon I came to was Chicago, and I hadn’t missed any until I had my son 17 months ago; since then, I’ve missed a couple.

The power of the Drupal Site-Builder

John Kennedy: Absolutely. Drupal creates this role of the site-builder. It exists in other ecosystems, but it’s really clear in Drupal: someone who can be, but is not necessarily, a developer, and can be, but is not necessarily, an author, and who actually creates experiences by assembling modules and functionality. That could be layouts with Panels, or business logic with Rules, or a range of other functionality brought in through the module ecosystem. I think that role is incredibly powerful because it allows little organizations and large organizations to much better leverage their expertise to build great experiences and complicated functionality.

It also facilitates this amazing ecosystem of people who scratch their own itch but also contribute to a wider modular functionality, and it’s a lot more sophisticated, I would say, than the module ecosystems you see in things like Ruby Gems or the wider Composer and PHP ecosystem, because it actually takes into account that there’s an end user who needs an administration interface and guidance on how to implement it. It’s not just a piece of code that you plug in. It’s also, by default, an administration interface and general principles that allow it to slot into Drupal really easily.

jam: I’m going to try and boil that down. What I tell people often in this context is Drupal has taken the incredible power and flexibility of a lot of great code built by a lot of great developers and made a fundamental design decision along the way. “We are going to put the power of that code in the hands of less technical end users and make it available to anyone who wants to build community, build a business and so on.” So the fundamental design decision is empower the user-interface-user and not just the developer, is that fair?

John Kennedy: Yes, I think so. I think when we think about WordPress, they’re definitely empowering a user. They’re empowering an author, but it’s really just an author. It’s someone who just wants to publish simple content or who wants to write words or add media. Drupal enables an expert user - it’s not necessarily a coder, but it’s someone who has a little bit more knowledge and then can create something really sophisticated.

What is Lightning for?

John Kennedy: So the way I like to talk about Lightning is that it’s not an out-of-the-box distribution. It’s a framework. It’s a way to cut 20% off any large project that wants to achieve a great authoring experience. Our tagline for it is "enabling developers to create great authoring experiences and empowering editorial teams". It’s not meant to be a beautiful instant experience like Atrium. It’s really meant to be a set of principles, frameworks, code, best practices, and documentation that developers can take to leverage their time, so they don’t have to think about that core set of functionality around authoring which, as we defined it, is layout, preview, media integration, and workflow. Within those categories, we enable use cases like putting groups of content through a workflow and being able to preview them.

So you can enable scenarios like the election-night scenario or the Super Bowl scenario, where I might have two or three groups of content that I write and want to be able to preview, but only one of them is ever going to get published. That was really hard previously, but in Lightning we brought together a few different modules to enable that, bringing together workspaces and Workbench Moderation and a couple of other things. You can do so much more than we’ve done, but by giving you that little piece, I think we’ve enabled some really interesting use cases.

So there are a few of those different things that we’ve brought together. It’s really about the fact that we think those four functionalities should be tightly coupled for enterprise authoring - not all the time, but specifically if you’re a large organization with many authors and a sophisticated authoring process, then those functionalities should be tightly coupled, because they should integrate with each other for that use case.

jam: So it’s a time saver for developers, and it’s also an opinionated way of integrating functionality within Drupal. And I imagine - am I right in saying that if people apply it, it will also save a lot of time on maintenance and on handing off projects between different dev shops, because a set of universal and good choices has been made along the way?

John Kennedy: Absolutely. Our internal PS team now, when they launch a project, they use Lightning by default.

jam: Even if it’s not an authoring-heavy site, is that what they’ll start with anyway?

John Kennedy: I would say that Acquia clients tend to have authoring needs, so I don’t think Lightning should be necessarily the 80% use case for all Drupal shops out there, but it is for Acquia because that’s the kind of client we have. I think if you’re thinking about enterprise authoring, not just to post a blog, in fact not just multiple content types and interesting views, I think I’m really talking about if you have multiple authors and work flows and you need to create lots of different layouts and landing pages and all these kinds of slightly more difficult use cases, then yes, you should be thinking about Lightning.

jam: This is fully open source. When can I get it and how can I make it better?

John Kennedy: Drupal.org. In fact, we are being as transparent as we can possibly be. We publish all of our release notes, but our forward releases also have their stories on Drupal.org, so you can go and see what we’re targeting and which issues we’ve already covered. If they’re closed, they’ll have the little line through them like they do on Drupal.org, and we’re publishing as much as we can. So really, people aren’t just helping us fix problems, but influencing our forward roadmap as well. We want to hear what people want.

jam: That’s great, okay. So you’re combining deep technical knowledge and intuition with actual reports from people who do this every day.

John Kennedy: That’s it.

The Lightning Top 3

jam: What would you say are the top three things about Lightning that make it really ideal for its target use cases?

John Kennedy: So if developers are talking to their managers about why they should use Lightning, the obvious ones are the feature sets I talked about before: workflow, preview, layout, media. But actually, what makes this brilliant for developers is the set of principles that we’ve come up with to build Lightning.

  1. One of those is that you never have to undo anything. With a lot of distributions, you’ve got to go in and undo configuration or decisions that were made earlier, and that makes them less useful for you. We’ve really taken an unopinionated approach so that you can take Lightning and build forward.
  2. The next one is automated testing. We’ve built Behat tests into all of our major functionality and that means that as you build on top of Lightning, you can actually test whether you’re breaking any of our stuff which is really useful when you’re building an enterprise authoring system.
  3. The final one is upgradability and this is controversial because a lot of distributions aim at this and don’t quite get there, but we think by keeping a small-ish core of Lightning, we can actually maintain an upgrade path going forward. That means that if you build on Lightning, you can actually get free features as we upgrade Lightning. Those won’t necessarily be turned on automatically, but as a developer, as you upgrade Lightning, you have the ability to incorporate more of this functionality into the experience that you’re giving your users.

jam: So the first couple of those really react to problems that we’ve all faced in one situation or another in Drupal consulting on the web, and this idea of build-it-forward and upgradability built into the distro really hits - if you can nail that, that really hits some pain points that I’ve certainly experienced along the way. Cool, great. So build-it-forward sounds like a good way to sum that up.

John Kennedy: Yes. It’s hard. What we’re doing is hard and we recognize that and there’ll be a lot of time spent maintaining upgradability and those concepts, but we think they’re key to the success of the distribution, so we’re putting the effort in.

jam: Now, in real time it is early March 2016. We’ve had Drupal 8 for a few months now and Drupal 8.1 is coming up. Is the Lightning Drupal 8 version already out and ready?

John Kennedy: So we’re in beta. I think we’re in beta four right now and as I said, our professional services department is already using our beta for building real sites, but we’re not going to release our GA version of Lightning until the 31st of March [this has slipped a bit since we recorded this conversation]. There’re a couple more features we want to put in there, things we want to tighten up, and actually we’re now thinking about what that looks like in 8.1 and how we bring 8.1 into our platform as well.

The Drupal 8 Module Acceleration Program

jam: So I’ve been doing Drupal, I noticed, for 11 years and I’ve figured out recently that a huge percentage of our community has never experienced a new major version release. Drupal 7 was our major version for more than five years and while we did this enormous rebuild, re-architecture, that has produced something really wonderful - I think Drupal 8 is fantastic. I’m enjoying every moment I use it.

Those of us who’ve been in since 4.6 - which is actually ... so we’ve been doing Drupal about the same amount of time - have seen a lot more releases. For example, Drupal 6, when it came out, was completely unusable. Every Drupal 5 site worth talking about used Panels, Views, and CCK, and at the time, the Views maintainer did not want to port Views 1 to Drupal 6 and took another six or eight months to finish Views 2 for Drupal 6, so Drupal 6 was kind of dead in the water for a long time.

Drupal 7 had really good uptake and it was a solid release, but still we had this situation where it took a year for the contributed module space to really, really catch up with Drupal 7. So I want to point out a couple of things about Drupal 8 that make it a much more usable release than we’ve ever seen before. Drupal 8 was completely and thoroughly tested from the first cutting of the branch, every single patch that was applied, so it’s really functionally solid. You can do a lot more with Core: Views is in Core, multilingual is in Core, and it’s very compact; a ton of boilerplate has been taken out, and you can make your own admin interface because it’s all Views. I mean, it’s really very powerful and flexible, and you can do great sites already now.

But there’s also a lot more economic value hanging on the Drupal ecosystem for a major release than has ever happened before, and a lot of us--Acquia, community, shops, clients all around the world--don’t want to risk Drupal 8 failing as a release. We want this uptake to happen a lot faster. So I think the Lightning distribution coming from Acquia, already being used in customer projects, and being released on March 31st [and/or soon thereafter!] sends a signal: “Hey, this is ready to use.” One of my projects right now is writing the Drupal 8 Module of the Week series. First of all, I want to celebrate the people who’ve done the work to create these modules and make them available for Drupal 8, but we also want to tell people, “Hey, use this thing. It’s awesome and you can do all these different things with it.”

Your other big project: you’re leading Acquia’s Module Acceleration Program for Drupal 8. You’ve written some blog posts about it. Do you want to talk about what that is and how it addresses what I’ve just been going on and on and on about?

John Kennedy: Sure thing. So we did think a lot about Drupal 7’s adoption path and I was around and I made mistakes in 6 and I made mistakes in 7 and some of those mistakes were taking on modules early when they were not ready for release and I think lots of people being burnt by that process means they wait a little while before taking up a new major version.

What we wanted to do was take the great work of the community and bring modules to production readiness, giving people confidence to go out and implement them and use the functionality they’d expect from Drupal 7 in Drupal 8. Views is in Core now, so that’s less of an issue, but there is a whole range of modules that people use all the time, and we have lots of experience and lots of channels to hear what people need: through our clients, through our partners, through all of the community members we have in Acquia. We really ran a process where we asked: what are the top 50 modules that are going to make the biggest impact on Drupal 8’s usability?

We built that list; that was in conjunction with Angie and a range of other people at Acquia and we looked at how do we do that? How do we bring them to production? What we found was that all we needed to do was talk to people who already wanted to do them and give a little bit of funding so that they could take the time to do it. So that meant going to some of our partners like Palantir and like Lullabot and other shops who really wanted to give the time, and they gave us a really low community rate to go and do what they already wanted to do. We wanted them done, they wanted them done ...

jam: So you’re threading the needle somewhere between what a developer costs in a commercial setting, building a client project and the pure volunteerism which, for better or worse, most of our community wants to contribute, most of our community wants to make a difference, but when you’ve got paid work and volunteer work, the volunteer work happens whenever it can, right?

John Kennedy: Yes.

jam: So threading this needle, how did you figure out a community development rate that’s good enough for people to focus on helping the whole community by upgrading these functionalities?

John Kennedy: There was already a rate out there that we went by as a rough approach, but there were also people who wanted to give their time and just said, “Look, I can do it and this is what I can do it for.” A lot of the time, that was even better. So we brought that group of people together, and it was made up of maybe 11 externals who were working on contract for us, maybe another five within Acquia who were doing the work as part of their jobs, and then maybe another 5 to 10 from the community who had parallel projects that needed these modules and actually just needed to get them done.

What we did then was we helped coordinate all those people. So we started running ... our internal scrum is daily, but then we had an external scrum which was weekly. Then we broke out and had a work flow scrum so that those people who are really interested in that could gather around that. In fact, we kicked a lot of this off at the summit that we had at BADCamp last year where we brought together one subgroup of the wider group which was around authoring. I was really interested in how Lightning was going to get done and a lot of other people were interested in how other parts of authoring were going to get done.

So we all got together at BADCamp and came up with the needs for this authoring experience and that fed into a lot of what we built as well, but it was really a lot of community engagement, a lot of going out and talking to the maintainers of modules and working out whether they had the time to actually do the ports themselves, or whether they could share maintainership, or whether they could support an effort, whether they could do patch reviews and things like this for our contractors and our contributors.

I think that was a lot of the goal in the project. It was great to get great rates from people and great to work with lots of the community, but that coordination effort meant we just steamed ahead in terms of getting modules done. I don’t want to take credit for ... I don’t think even the program should take credit for doing whole module ports. We took a lot of code from ports that already existed, but what we did was get it production-ready. We got it out there, we got it to beta, we got it to the point where people use it in projects.

jam: One of the things that I’ve been really proud of over the years being part of Acquia is where the rubber really, really hits the road. Acquia has gotten open source right and especially on the contribution side, the first thing Dries did was hire Gábor to get Drupal 6 out the door in the best shape possible. We worked with really early versions of Drupal 7 when I was in Engineering building Drupal Gardens and we made some big mistakes around module upgrades at the time and we’ve learned from that and have apologized, I believe, along the way, but when the rubber hits the road, Acquia has done an awful lot of things that have helped the entire project and our community. I’m really, really proud of that as a Drupalist.

In this case, as you’ve written in the blog, Acquia’s invested US $500,000.00 in these upgrades, and frankly the fact that we get production-ready modules and our community gets some more rent paid, I think that’s a great outcome for everybody. How do you think this investment is going to - what’s the return on this investment going to be for us and for Drupal?

John Kennedy: We talked before about how a complete module ecosystem accelerates Drupal. I think the big return that we’ll see is people coming into Drupal 8 earlier, so existing Drupalists, but also people who are evaluating Drupal, seeing the functionality ready and getting onto Drupal, and we’ll see that curve. That exponential growth that we’ve seen in 7, we’re going to see that earlier. That’s the big return.

I think what keeps us all up at night is seeing the big acceleration of Drupal 8. I’ve seen some early stats, and there’s some really good news about what Drupal 8 is doing: if we look at the path of Drupal 7, it looks like we’re actually at double the number of sites at the same stage. So that’s pretty good. What we saw over a long period of time was about 76% year-on-year growth of Drupal 7, and that brought us from a community that was already significant to one that ran 3% of the web. It was huge, and a lot of the 35,000-odd active contributors we have now came onboard in that cycle. We really want to see that happen again. We’re going to bring back the excitement to Drupal.

I think Drupal is great at a lot of things and it’s still really relevant in this day and age where we are looking at more and more sophisticated use cases for authoring and for web applications, and if we want to create a world where regular people can create great experiences, not just developers, Drupal is a fantastic application for that.

jam: Well, thank you for coordinating all of this. Thank you for taking the time to talk with me today. Now get back to work and make Drupal 8 production-ready.

John Kennedy: Sure thing.

Podcast series: Drupal 8. Skill level: Intermediate, Advanced.
Categories: FLOSS Project Planets

Drupal Commerce: Upgrade paths between Drupal 8 module versions

Planet Drupal - Fri, 2016-04-22 11:11

Over time, modules update their shipped configuration, such as rules, views, and node types. Users expect to get these updates when they update the modules. For example, Drupal Commerce 1.x provides over eleven default Views and Rules. From release to release, these Views and Rules were enhanced with new features or bug fixes reported by the community.

Take the order management view. In a future release of Drupal Commerce, we may add a newly exposed filter to the view. In Drupal 7 the view would automatically be updated unless it was overridden. That is not the case in Drupal 8.

Drupal 8 introduces the Configuration Management system. While robust, it changes how we work with configuration compared to Drupal 7. Instead of a module owning configuration, configuration is now owned by the site. When a module is installed, its configuration is imported. On module update, no configuration updates are performed. Instead, the module must write an update hook to perform one of the following updates:

Categories: FLOSS Project Planets

Gabriel Pettier: Position/Size of widgets in kivy.

Planet Python - Fri, 2016-04-22 08:07

I see it’s a common struggle for people to understand how to manage the size and position of widgets in Kivy.

There is not much to it, really, just a few principles to understand, but they are better explained by example, I believe.

dumb positioning

Let’s create a root widget, and put two buttons in it.

root = Widget()
b1 = Button()
b2 = Button()
root.add_widget(b1)
root.add_widget(b2)

What will happen when I return root to be used as my root widget? As the root widget, it will have all the space for itself; the two buttons, however, will only use their default size, which is (100, 100), and their default position, which is (0, 0), the bottom-left corner of the screen.

We can change the b1 and b2 positions to absolute values; let’s change lines 2 and 3 to:

b1 = Button(pos=(100, 100))
b2 = Button(pos=(200, 200))

something a little smarter

Now, that doesn’t make for a very flexible UI. If you ask me, it would be better to use the parent’s size to decide where to place the buttons, no?

b1 = Button(pos=(root.x, root.height / 2))
b2 = Button(pos=(root.width - 100, root.height / 2))

No, that’s not much smarter; it’s still quite dumb, actually. It doesn’t even work. Why? Because we just instantiated root: it’s not the root widget yet, so it doesn’t have its final size, and our widgets will use its default value of (100, 100) for their calculations.

Let’s fix this with bindings.

b1 = Button()
root.bind(size=reposition_b1, pos=reposition_b1)
b2 = Button()
root.bind(size=reposition_b2, pos=reposition_b2)

Where these two functions will be along the lines of:

def reposition_b1(root, *args):
    b1.pos = root.x, root.height / 2 - b1.height / 2
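The matching callback for b2 is symmetric, pinning it to the parent’s right edge (a sketch following the same pattern; not from the original post):

def reposition_b2(root, *args):
    # pin b2 against the parent's right edge, vertically centered
    b2.pos = root.x + root.width - b2.width, root.height / 2 - b2.height / 2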

Also, widgets have nice alias properties for things like height / 2, x + width, x + width / 2; here is a quick table:

right = x + width
top = y + height
center_x = x + width / 2
center_y = y + height / 2
center = (center_x, center_y)

And actually, pos is just an alias property for (x, y). Alias properties work both ways, so we can use them to set our positions in a simpler way.

def reposition_b1(root, *args):
    b1.x = root.x
    b1.center_y = root.center_y

A lot of work for not so much, right? That’s because we are not using the right tools!

Let’s jump on a layout, specifically FloatLayout.

FloatLayout to the rescue

Layouts are here to make our lives easier when constructing a UI.

FloatLayout is very flexible: it lets you set rules for positions, or do things in the absolute way.

root = FloatLayout()
b1 = Button(pos_hint={'x': 0, 'center_y': .5})
b2 = Button(pos_hint={'right': 1, 'center_y': .5})

pos_hint makes the values relative to the size/position of the parent. So here, for b1, x will be the x of the parent, and center_y will be at the middle between y and top.

Now, if you run this, you may get a surprise: FloatLayout also makes the sizes of b1 and b2 relative to root.size, and since the default size_hint is (1, 1), they both get the same size as the root widget. Sometimes you want to keep that size relative, maybe with a different value; sometimes you want the widgets to keep the value passed in size (or the default size value). If so, you can pass (None, None) as size_hint.

b1 = Button(pos_hint={'x': 0, 'center_y': .5}, size_hint=(None, None))

Conclusion
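To recap, here is a complete, runnable version of the FloatLayout example (the app class and button labels are my own additions, just to make it self-contained):

from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.floatlayout import FloatLayout


class DemoApp(App):
    def build(self):
        root = FloatLayout()
        # left and right edges, both vertically centered, keeping the
        # buttons' default (100, 100) size instead of the relative default
        b1 = Button(text='b1', pos_hint={'x': 0, 'center_y': .5},
                    size_hint=(None, None))
        b2 = Button(text='b2', pos_hint={'right': 1, 'center_y': .5},
                    size_hint=(None, None))
        root.add_widget(b1)
        root.add_widget(b2)
        return root


if __name__ == '__main__':
    DemoApp().run()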

I’ll probably do other posts on the subject, because there is much more to it, but the basic principles are here. I really encourage you to look at this section of the documentation anyway, which goes further into the various kinds of layouts you have access to.

Categories: FLOSS Project Planets

OpenLucius: 11 Cool Drupal modules for site builders | April 2016

Planet Drupal - Fri, 2016-04-22 03:59

Let’s start straight away with the things that struck me about Drupal module updates of last month:

1. Image effects

Both Drupal 7 and Drupal 8 core have features to facilitate graphics, including scaling and cropping.

To perform other actions, Drupal 7 had additional modules: ImageCache Actions and Textimage:

  • Placing overlays, for example for round corners
  • Adding a watermark, for example your logo
  • Placing text over image, for example your company name
  • Making images lighter/darker

In Drupal 8 these additional image features are now available in the Image Effects module.


Categories: FLOSS Project Planets

grep @ Savannah: grep-2.25 released [stable]

GNU Planet! - Fri, 2016-04-22 01:16
This is to announce grep-2.25, a stable release.

Yet another bug-fix release. This one was prompted primarily by the realization that even with LC_ALL=C, grep-2.24 could still report "Binary file F matches". Special thanks to Paul Eggert for doing so much of the work, and to Assaf Gordon for his patch to make grep diagnose errors more precisely. There have been 15 commits by 2 people in the 6 weeks since 2.24. See the NEWS below for a brief summary.

Jim [on behalf of the grep maintainers]

==================================================================

Here is the GNU grep home page:
    http://gnu.org/s/grep/

For a summary of changes and contributors, see:
    http://git.sv.gnu.org/gitweb/?p=grep.git;a=shortlog;h=v2.25
or run this command from a git-cloned grep directory:
    git shortlog v2.24..v2.25

To summarize the 55 gnulib-related changes, run these commands from a git-cloned grep directory:
    git checkout v2.25
    git submodule summary v2.24

Here are the compressed sources and a GPG detached signature[*]:
    http://ftp.gnu.org/gnu/grep/grep-2.25.tar.xz
    http://ftp.gnu.org/gnu/grep/grep-2.25.tar.xz.sig

Use a mirror for higher download bandwidth:
    http://ftpmirror.gnu.org/grep/grep-2.25.tar.xz
    http://ftpmirror.gnu.org/grep/grep-2.25.tar.xz.sig

[*] Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact. First, be sure to download both the .sig file and the corresponding tarball. Then, run a command like this:

    gpg --verify grep-2.25.tar.xz.sig

If that command fails because you don't have the required public key, then run this command to import it:

    gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
    Autoconf 2.69.147-5ad35
    Automake 1.99a
    Gnulib v0.1-752-gb7bc3c1

NEWS

* Noteworthy changes in release 2.25 (2016-04-21) [stable]

** Bug fixes

In the C or POSIX locale, grep now treats all bytes as valid characters even if the C runtime library says otherwise. The revised behavior is more compatible with the original intent of POSIX, and the next release of POSIX will likely make this official. [bug introduced in grep-2.23]

grep -Pz no longer mistakenly diagnoses patterns like [^a] that use negated character classes. [bug introduced in grep-2.24]

grep -oz now uses null bytes, not newlines, to terminate output lines. [bug introduced in grep-2.5]

** Improvements

grep now outputs details more consistently when reporting a write error. E.g., "grep: write error: No space left on device" rather than just "grep: write error".
Categories: FLOSS Project Planets

Dan Crosta: Demystifying Logistic Regression

Planet Python - Fri, 2016-04-22 00:00

For our hackathon this week, I, along with several co-workers, decided to re-implement Vowpal Wabbit (aka "VW") in Go as a chance to learn more about how logistic regression, a common machine learning approach, works, and to gain some practical programming experience with Go.

Though our hackathon project focused on learning Go, in this post I want to spotlight logistic regression, which is far simpler in practice than I had previously thought. I'll use a very simple (perhaps simplistic?) implementation in pure Python to explain how to train and use a logistic regression model.

Predicting True or False

Logistic regression is a statistical learning technique useful for predicting the probability of a binary outcome -- true or false. As we've previously written, we use logistic regression with VW to predict the likelihood of a user clicking on a particular ad in a particular context, a true or false outcome.

Logistic regression, like many machine learning systems, learns from a "training set" of previous events, and tries to build predictions for new events it hasn't seen before. For instance, we train our click prediction model using several weeks of ad views (impressions) and ad clicks, and then use that to make predictions about new events to determine how likely the user is to click on an impression.

Each data point in the training set is further broken down into "features," attributes of the event that we want the model to learn from. For online advertising, common features include the size of the ad, the site on which the ad is being shown, the location of the user viewing the ad, and so on.

Logistic regression, or more accurately, Stochastic Gradient Descent, the algorithm that trains a logistic regression model, computes a weight to go along with each feature. The prediction is the sum of the products of each feature's value and each feature's weight, passed through the logistic function to "squash" the answer into a number in the range [0.0, 1.0].

[Figure: The standard logistic function. Public domain image from Wikipedia by Qef.]
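In code, that squashing step is a one-liner (a minimal sketch of the standard logistic function, matching the figure above):

import math

def logistic(score):
    # maps any real-valued score into the open interval (0.0, 1.0)
    return 1.0 / (1.0 + math.exp(-score))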

Applying a logistic regression model is therefore relatively easy; it's a simple loop to write. What surprised me most during the hackathon project is that training a logistic regression model (coming up with the values for the weights we multiply by the features) is also surprisingly concise and straightforward.

An Example

Let's consider a click data set. Each row in the table is an event we've (hypothetically) recorded, either an impression which did not get a click, or an impression plus click, along with a series of attributes about the event that we'll use to build our regression model.

Size     Site               Browser  State  EventType
160x600  wunderground.com   IE       AZ     impression
160x600  ebay.com           IE       IA     click
728x90   msn.com            IE       MS     impression
160x600  ebay.com           Chrome   FL     impression
728x90   popularscience.tv  Chrome   CA     impression
320x50   weather.com        Chrome   NB     impression
300x250  latimes.com        Firefox  CA     impression
728x90   popularscience.tv  IE       OR     impression
300x250  msn.com            Chrome   MI     impression
728x90   dictionary.com     Chrome   NY     impression
300x250  ebay.com           Safari   OH     impression
320x50   msn.com            Chrome   FL     click
728x90   dictionary.com     Chrome   NJ     impression
728x90   urbanspoon.com     Firefox  OR     impression
728x90   msn.com            Chrome   FL     impression
728x90   latimes.com        Chrome   AZ     impression
728x90   realtor.com        Chrome   CA     impression
728x90   deviantart.com     Chrome   DC     impression
728x90   weather.com        IE       OH     click
728x90   msn.com            IE       AR     impression
728x90   msn.com            IE       MN     impression
300x250  wunderground.com   Chrome   MI     impression
300x250  dictionary.com     IE       OK     impression
300x250  popularscience.tv  Chrome   IN     impression
300x250  dictionary.com     Chrome   MA     impression
300x250  weather.com        Chrome   VT     impression
320x50   ebay.com           Chrome   OH     click
300x600  popularscience.tv  IE       NJ     impression
728x90   weather.com        IE       WA     impression
300x250  weather.com        IE       MO     impression

These 30 data points will form our training data set. The model will learn from these examples what impact the attributes -- size, site, browser, and US state -- have on the likelihood of the user clicking on an ad that we show them.

In reality, to build a good predictive model you need many more events than the 30 that I am showing here -- we use tens of millions in our production modeling pipelines in order to get good performance and coverage of the variety of values that we see in practice.

To convert this data set to work with logistic regression, which computes a binary outcome, we'll treat each click as a target value of 1.0, also known as a "positive" event, and each impression as 0.0, or a "negative" event.

We also need to account for the fact that each of our features is a string, but logistic regression needs features that have values that it can multiply by its weight to compute the sum-of-products. Our features don't have numeric values, just some set of possibilities that each attribute can take on. For instance, the size attribute is one of a small set of ad sizes we work with, such as 300x250, 728x90, or 160x600. Features that can take on some set of discrete values are called "categorical" features.

We could assign a numeric value to each of the possibilities, for instance use the value 1 when the size is 300x250, 2 when the size is 728x90, and 3 when the size is 160x600. This would work mechanically, but won't produce the outcome that we want. Since the weight of the feature "size" is multiplied by the value of the feature in each event, this would mean that we've decided, arbitrarily, that 160x600 is three times more "clicky" than 300x250, which the data set may not bear out. Moreover, for some attributes, like site, we have many, many possible values, and determining what numeric value to assign to each of the possibilities is tedious, and will further amplify the problem just described.

The proper solution to this problem is as effective as it is elegant: rather than having 1 feature, "site", with values like "ebay.com" or "msn.com", we create a much larger set of features, one for each site that we have seen in our training or evaluation data set. For a given training data example, if the "site" column has value "msn.com", we say that the feature "site:msn.com" has value 1, and that all other features in the "site:..." family have value 0. Essentially, we pivot the data by all the values of each categorical feature. Of course, the total number of features in our training set becomes very large, but, as we will see, SGD can accommodate this gracefully.
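To illustrate the pivot, here is one possible helper (the function name and the "attribute:value" format are my own, for illustration; the extract_features function shown later achieves the same effect implicitly):

def one_hot(attributes):
    # {"site": "msn.com", "size": "728x90"}
    #   -> {"site:msn.com": 1, "size:728x90": 1}
    return dict(("%s:%s" % (name, value), 1)
                for name, value in attributes.iteritems())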

Stochastic Gradient Descent In Action

At the start of the hackathon, we wrote some very simple Python code to serve as a reference for how logistic regression should work. Python reads almost like pseudocode, so that's what we'll show here, too.

We decided to model examples as Python dictionaries, with a top-level field indicating the actual outcome (1.0 for clicks, 0.0 for impressions), and a nested dictionary of features. Here's what one looks like:

{ "actual": 1.0, "features": { "site": "ebay.com", "size": "160x600", "browser": "IE", "state": "IA", } }

Given an example and a dictionary of weights, we can predict the outcome straightforwardly:

import math

def predict(weights, example):
    score = 0.0
    for feature in extract_features(example):
        score += weights.get(feature, 0.0)
    score += weights.get("bias", 0.0)
    return 1.0 / (1 + math.exp(-score))

We initialize the score to 0, then for each feature in the example, we get the weight from the dictionary. If the weight doesn't exist (because the model hasn't been trained on any examples containing this feature), then we assume 0 as the weight, which leaves the score as-is.

Additionally, we add in a "bias" weight, which you can think of as a feature that every example has. The "bias" weight helps SGD correct for an overall skew in our training data set one way or another. It's possible to leave this out, but it's generally helpful to keep it.

The expression at the end of this function is the logistic function, as we saw above, which squashes the result into the range 0 to 1. We do this because we are trying to predict a binary outcome, so we want our prediction to be strictly between these two boundary values. If you do the math, you'll see that for an empty model -- one with nothing in the weights dictionary -- we will predict 0.5 for any input example.
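You can check that boundary case directly (hypothetical interpreter session, using predict as defined above):

>>> predict({}, {"actual": 0.0, "features": {"size": "300x250"}})
0.5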

Generating the features we use in SGD from our example is similarly straightforward:

def extract_features(example):
    return example["features"].iteritems()

Since our features are categorical, we can take advantage of Python's dictionary to generate (key, value) pairs as the actual features. Though all our examples have values for each of the four attributes we're considering, they each have different values -- thus the tuples generated by extract_features will be different for each example. This is our conversion from attributes with a set of possible values, to features which carry the implicit value 1.

Finally, we have enough code in place to see how stochastic gradient descent training actually works:

def train_on(examples, learning_rate=0.1):
    weights = {}
    for example in examples:
        prediction = predict(weights, example)
        actual = example["actual"]
        gradient = (actual - prediction)
        for feature in extract_features(example):
            current_weight = weights.get(feature, 0.0)
            weights[feature] = current_weight + learning_rate * gradient
        current_weight = weights.get("bias", 0.0)
        weights["bias"] = current_weight + learning_rate * gradient
    return weights

For each example in the training set, we first compute the prediction, and, along with the actual outcome, we compute the gradient. The gradient is a number between -1 and 1, and helps us adjust the weight for each feature, as we'll see in a moment.

Next, for each feature in the example, we update the weight for that feature by adding or subtracting a small amount, the learning_rate times the gradient, from the current weight. Here we can now see how the gradient works in action. Consider the case of a positive example (a click). The actual value will be 1.0, so if our prediction is high (close to 1.0), we'll adjust the weights up by a small amount; if the prediction is low, the gradient will be large, and we'll adjust the weights up by a correspondingly large amount. On the other hand, consider when the actual outcome is 0.0 for a negative event. If the prediction is high, the gradient will be close to -1, so we'll lower the weight by a relatively large amount; if the prediction is close to 0, we'll lower the weight by a relatively small amount.

For the first example, or more generally, the first time a given feature is seen, the feature weight is 0. But, since each example has a different set of features, over time the weights will diverge from one another, and converge on values that tend to minimize the difference between the actual and predicted value for each example.
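To make that concrete, here is a hedged end-to-end run over two of the hypothetical events from the table above, using the functions defined in this post:

examples = [
    {"actual": 1.0, "features": {"site": "ebay.com", "size": "160x600",
                                 "browser": "IE", "state": "IA"}},
    {"actual": 0.0, "features": {"site": "msn.com", "size": "728x90",
                                 "browser": "IE", "state": "MS"}},
]

weights = train_on(examples)
# after one pass, the click example scores above 0.5
# and the impression example scores below it
print predict(weights, examples[0])
print predict(weights, examples[1])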

Real-World Considerations

What I've shown here is a very basic implementation of logistic regression and stochastic gradient descent. It is enough to begin making some predictions from your data set, but probably not enough to use in a production system as-is. But I've taken up enough of your time with a relatively long post, so the following features are left as an exercise for the reader to implement:

  1. The train_on function assumes that one pass through the training data is sufficient to learn all that we can or need to. In practice, often several passes are necessary. Different strategies can be used to determine when the model has "learned enough," but most rely on having a separate testing data set, composed of real examples the model will not train on. The error on these examples is critical to understanding how the model will perform in the wild.
  2. As it is written here, we only support categorical features. But what if the data set has real-valued features, like the historical CTR of the site or of the user viewing the impression? To support this, we need to multiply the weight by the feature value in predict, and multiply the gradient by the feature value in train_on; see the sketch after this list.
  3. This code doesn't perform any kind of regularization, a technique used to help combat over-fitting of the model to the training data. Generally speaking, regularization nudges the weights of all features closer to 0 every iteration, which over time has the effect of reducing the importance to the final prediction of infrequently-seen features.
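For point 2, the change is small. Here is a hedged sketch, assuming features become a dictionary of name to numeric value (categorical features keep the implicit value 1):

def predict(weights, example):
    score = 0.0
    for feature, value in example["features"].iteritems():
        # scale each weight by the feature's numeric value
        score += weights.get(feature, 0.0) * value
    score += weights.get("bias", 0.0)
    return 1.0 / (1 + math.exp(-score))

# ...and in train_on, the per-feature update becomes:
#     weights[feature] = current_weight + learning_rate * gradient * value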
Categories: FLOSS Project Planets

Matthew Garrett: Circumventing Ubuntu Snap confinement

Planet Debian - Thu, 2016-04-21 21:51
Ubuntu 16.04 was released today, with one of the highlights being the new Snap package format. Snaps are intended to make it easier to distribute applications for Ubuntu - they include their dependencies rather than relying on the archive, they can be updated on a schedule that's separate from the distribution itself and they're confined by a strong security policy that makes it impossible for an app to steal your data.

At least, that's what Canonical assert. It's true in a sense - if you're using Snap packages on Mir (ie, Ubuntu mobile) then there's a genuine improvement in security. But if you're using X11 (ie, Ubuntu desktop) it's horribly, awfully misleading. Any Snap package you install is completely capable of copying all your private data to wherever it wants with very little difficulty.

The problem here is the X11 windowing system. X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window. An application that has no access to any of your private data can wait until your session is idle, open an unconfined terminal and then use curl to send your data to a remote site. As long as Ubuntu desktop still uses X11, the Snap format provides you with very little meaningful security. Mir and Wayland both fix this, which is why Wayland is a prerequisite for the sandboxed xdg-app design.

I've produced a quick proof of concept of this. Grab XEvilTeddy from git, install Snapcraft (it's in 16.04), snapcraft snap, sudo snap install xevilteddy*.snap, /snap/bin/xevilteddy.xteddy . An adorable teddy bear! How cute. Now open Firefox and start typing, then check back in your terminal window. Oh no! All my secrets. Open another terminal window and give it focus. Oh no! An injected command that could instead have been a curl session that uploaded your private SSH keys to somewhere that's not going to respect your privacy.

The Snap format provides a lot of underlying technology that is a great step towards being able to protect systems against untrustworthy third-party applications, and once Ubuntu shifts to using Mir by default it'll be much better than the status quo. But right now the protections it provides are easily circumvented, and it's disingenuous to claim that it currently gives desktop users any real security.

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20160422 ('PanamaPapers') released

GNU Planet! - Thu, 2016-04-21 18:51

GNU Parallel 20160422 ('PanamaPapers') has been released. It is available for download at: http://ftp.gnu.org/gnu/parallel/

Haiku of the month:

xapply too strict?
:::+
is just made for you
-- Ole Tange

New in this release:

  • :::+ and ::::+ work like ::: and :::: but link this input source to the previous input source in an --xapply fashion. Contrary to --xapply, values do not wrap: the shortest input source determines the length.
  • --line-buffer --keep-order now outputs continuously from the oldest job still running. This is more what you would expect than the earlier behaviour where --keep-order had no effect with --line-buffer.
  • env_parallel supports tcsh, csh, pdksh. In fish it now supports arrays. In csh/tcsh it now supports variables, aliases, and arrays with no special chars. In pdksh it supports aliases, functions, variables, and arrays.
  • Function exporting on Mac OS X works around old Bash version.
  • Better CPU detection on OpenIndiana.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://www.gnu.org/s/parallel/merchandise.html
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --bibtex)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets

kver’s definition of anarchy

Planet KDE - Thu, 2016-04-21 18:12

This amused me

Possible differences/anarchy may include:

  • Strange chewing noises if inserting a disc.
  • Never finding the correct orientation of USB plugs.
  • Your machine mysteriously moving several inches if you leave the room briefly.
  • Your printer demanding white ink to operate.
  • Small house pets may go missing, tufts of fur in keyboard.
  • Growling noises if holding the power button down.
  • Kate highlighting the PHP ‘die’ command in bold, regardless of setting.
Categories: FLOSS Project Planets

Lullabot: “One Weird Trick” for Drupal Security... or Something

Planet Drupal - Thu, 2016-04-21 16:00
Matt & Mike talk with Drupal Security Team Lead Michael Hess, along with the former lead Greg Knaddison. We talk about the current state of Drupal security, along with their Drupalcon session, "Watch the Hacker Hack".
Categories: FLOSS Project Planets

Mike C. Fletcher: mcastsocket broken out into its own project

Planet Python - Thu, 2016-04-21 15:26

I use multicast a lot in my work, and I almost always wind up using my branch of PyZeroconf's mcastsocket module... and that's not cleanly pip installable, so I've now broken out the mcastsocket module into its own project. Changes with this release:

  • implements (IPv4-only) single-source-multicast support
  • should work cleanly on python 3.5 (it may have worked on 3.5 before, I honestly have no idea)
  • has Travis CI/Tox tests (though only a tiny set of tests)

Multicast away.

Categories: FLOSS Project Planets

Palantir: What DrupalCon Means to Us

Planet Drupal - Thu, 2016-04-21 14:48

Each and every year we pack up our booth, swag, and people, and make the pilgrimage to a location somewhere in the US for DrupalCon North America. It's always an amazing week, filled with knowledge, sharing, fun, and, of course, business. This year it's in New Orleans the week of May 9th, and, per usual, we have a lot going on so we'd like to share a few highlights.

Sessions

We have a few sessions led by our team this year, and we'd love to see you there! Better still, stop by our booth (no. 222) before or after each to talk more. We're an open book, and want to share.

D8 Module Acceleration Program (Workbench)

Ken Rickard
Ken is part of a D8 Module Acceleration Program panel for Workbench on Tuesday, May 10th at 2:15 pm in room no. 279.

Finding Your Purpose as a Drupal Agency

George DeMet
Founder and CEO George DeMet will be giving a talk on Finding Your Purpose as a Drupal Agency on Wednesday, May 11th at 4:45pm in room no. 262.

Booth

We'll be at booth no. 222 all week, and we'd like you to think of it as your basecamp. As you walk around the conference and attend sessions, if a question comes to mind or you need some clarification on some aspect of Drupal, web strategy, design, or development, please do come by and let's talk. And we mean it; we have couches...

Sponsorship

We always gladly sponsor both the Drupal Association and DrupalCon. It's good for the community, and it's good for our clients. Why? Because of the nature of the open source community, and the sheer amount of knowledge shared to solve common problems on the web. We utilize and pass on this knowledge to keep the cycle going.

Trivia Night

Speaking of sponsorship, we've sponsored Trivia Night at DrupalCon for the last few years, and it's always a fantastic night of fun and Drupal nerdiness! We're proud sponsors again this year, so we hope you can join us for a night filled with trivia, hilarity, and plenty of prizes.

Thursday, May 12
9:00pm - 12:00am (doors at 8:00pm)
U.S. Freedom Pavilion at National World War II Museum
1043 Magazine Street, New Orleans, LA
Free to attend

Related content

Everything You Wanted to Know About DrupalCon

DrupalCon is just a few weeks away in New Orleans, so our Account Manager Allison Manley is joined by our CEO and Founder George DeMet, Drupal veteran and PHP guru Larry "Crell" Garfield, and Senior Front-End Developer Lauren Byrwa on our podcast this time around. They share thoughts about the conference generally, what they're excited about specifically, and what they're expecting from the Driesnote, among other topics.

Planning for Long Term Success on the Web

Designing, architecting, and building enterprise-level Web projects can feel like a Moonshot. They're often expensive, complex, and time-consuming undertakings that require the long-term commitment and dedication of the entire company or organization. In this way, many of the lessons of the Apollo program are directly applicable to the work that we undertake with our customers every day at Palantir.

Project Management: The Musical!


Attending the conference? We'd love to say hello. Let's schedule a time to meet at DrupalCon.

Categories: FLOSS Project Planets

Mediacurrent: What it really means to run a project the “Agile Scrum” way

Planet Drupal - Thu, 2016-04-21 14:36

If you were to search “What is the Agile Scrum Methodology of project management?” you’d find this:

“…an alternative to traditional project management, typically used in software development. It helps teams respond to unpredictability through incremental, iterative work cadences, known as sprints. Scrum is one framework used to run a project using the Agile Methodology.”

Categories: FLOSS Project Planets

Chapter Three: More Drupal 8 Sites in the Wild

Planet Drupal - Thu, 2016-04-21 14:11

It's been a big week for Drupal 8 here at Chapter Three.

As mentioned previously, at the beginning of the year Chapter Three made the decision to build all new projects with Drupal 8. Knowing we had the help and Drupal 8 expertise of Alex and Daniel to back us up if we encountered any errors or incomplete contrib ports gave us the confidence to leave Drupal 7 in the dust. Having already had a few successful client projects on 8 last year let me know I wouldn't have a developer revolt on my hands as a result of that decision.

The fruits of that decision are starting to ripen.

Categories: FLOSS Project Planets

Metal Toad: What I Learned Today: Check Your Default Google Analytics Settings

Planet Drupal - Thu, 2016-04-21 13:48
April 21st, 2016, by Jonathan Jordan

Google Analytics Module Settings

Drupal's Google Analytics Module is great. There are a few settings, though, that I recently found out you'll want to pay closer attention to. First is the "Pages" section of the configuration form, which allows you to include or exclude the Google Analytics tracking code on certain pages. The default settings exclude the code on the following pages.

admin
admin/*
batch
node/add*
node/*/*
user/*/*

The other setting to keep in mind is in the "Privacy" section of the configuration form: in particular, the "Universal web tracking opt-out" setting, which is enabled by default. Those are pretty good defaults for most sites, but if you aren't aware of them, they can also cause you to lose some valuable analytics data.

The "Universal web tracking opt-out" setting allows users to setup their browsers to send a Do Not Track header. The Google Analytics module respects this header if the "Universal web tracking opt-out" setting is enabled. That is unless page caching is enabled. If the page is cached, then Google Analytics is always included in the cached page. So on production sites, this usually means that tracking is enabled except if a user is logged in and they setup their browser to send the Do Not Track header. Again, this seems like the reasonable and respectful default.

Introduce Custom Panel Pages Into the Mix

Now consider you have a custom Panels page with the URL "node/%/my-sub-page", a special sub-page for a particular content type on your site. The contents of the panel page don't matter much; what matters is that if you are using the default Google Analytics settings for googleanalytics_pages, this page will not have Google Analytics tracking on it, because it matches the "node/*/*" exclusion. The fix is fairly simple: update your Google Analytics settings and replace "node/*/*" with "node/*/edit", plus a few other paths you might not care about (e.g. "node/*/devel" and "node/*/nodequeue" if you use those modules).
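If you'd rather apply the fix in code, here's a minimal sketch using an update hook, assuming a hypothetical custom module named "mymodule":

/**
 * Replace the blanket node/*/* exclusion with the specific
 * sub-pages we actually want excluded from tracking.
 */
function mymodule_update_7001() {
  // One path pattern per line, mirroring the module's "Pages" textarea.
  $pages = implode("\n", array(
    'admin',
    'admin/*',
    'batch',
    'node/add*',
    'node/*/edit',
    'node/*/devel',
    'user/*/*',
  ));
  variable_set('googleanalytics_pages', $pages);
}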

This post is part of my challenge to never stop learning.
Categories: FLOSS Project Planets

Kubuntu 16.04 LTS Release Announcement

Planet KDE - Thu, 2016-04-21 12:45

The Kubuntu team is excited and delighted to announce the release of Kubuntu 16.04 Xenial Xerus.

This is the first release from the team since we became a completely volunteer group, just after the release of 15.10. Delivering a Long-Term Support (LTS) release is a superb achievement, and a testimony to our community's commitment to Ubuntu and KDE.

Beta-tester feedback has been resoundingly positive. This confirms the amazing work that is being undertaken by our upstream KDE community. Plasma 5, KDE Frameworks 5 and all of KDE continue to demonstrate how Free/Libre Open Source Software sets world class standards for innovation, usability and integration.

What can you expect from this latest release?

Our new software center, Plasma Discover, brim-full of software to choose from.

The latest KDE PIM with lots of features and fixes, including the latest Akonadi support and integration with MySQL 5.7.

Plasma 5, the next generation of KDE’s desktop has been rewritten to make it smoother to use while retaining the familiar setup.

Kubuntu 16.04 comes with KDE Applications 15.12 containing all your favourite apps from KDE, including Dolphin. Even more applications have been ported to KDE Frameworks 5, but those which aren't should fit in seamlessly. For a complete desktop suite of applications we've included some non-KDE applications such as LibreOffice 5.1 and Firefox 45.

Keen developer types and friends of Muon Package Manager will be delighted to know that the project has got a new team of maintainers, and a new release just in time for this Kubuntu LTS.

What are you waiting for? Download the ISO from our downloads section, or Upgrade now!

The Kubuntu Podcast team will be reviewing the latest release, and discussing feedback from the community on the next show. More details available in the podcast section.

Please also check out our release notes.

Categories: FLOSS Project Planets

Phponwebsites: Redirect users after login in Drupal 7

Planet Drupal - Thu, 2016-04-21 12:42
    This blog describes how to redirect users after they log into a Drupal 7 site. By default, Drupal redirects users to their user page after login. If you would rather send them to a different page, Drupal 7 lets you do that.
  You can redirect users after login in Drupal in the following two ways:
1. Redirect users after login using hook_user_login()
2. Redirect users after login using a custom form submit handler



Redirect users after login using hook_user_login():
     Drupal provides a hook called hook_user_login() that runs when a user logs in successfully. Note that in Drupal 7 its parameters are $edit and $account, not a form array. Let's see the code below.

/**
 * Implements hook_user_login().
 */
function phponwebsites_user_login(&$edit, $account) {
  // Set the destination to the page users should land on after login.
  // drupal_goto() honours $_GET['destination'] over the module's
  // default redirect to the user page.
  $_GET['destination'] = '<front>';
}
    Now you can check it: after logging in, Drupal will redirect you to the front page.

Redirect users after login using a custom form submit handler:
   Drupal offers an alternative method to redirect users after login: add a custom submit handler to the login forms using hook_form_alter(), then set the redirect page in that handler. Let's see the code below.


/**
 * Implements hook_form_alter().
 */
function phponwebsites_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'user_login' || $form_id == 'user_login_block') {
    // Append our submit handler so it runs after the default login submit.
    $form['#submit'][] = 'phponwebsites_custom_login_submit';
  }
}

/**
 * Custom submit handler for the user login forms.
 */
function phponwebsites_custom_login_submit($form, &$form_state) {
  // Page to redirect to after login; note the redirect belongs on
  // $form_state, not $form.
  $form_state['redirect'] = '<front>';
}

Now you will be redirected to the front page after logging into the Drupal site. I hope you now know how to redirect users after login in Drupal 7.
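As a further illustration, the same hook can send different users to different pages. This is a hypothetical sketch assuming a separate custom module named "mymodule"; the "editor" role and "dashboard" path are made-up names:

/**
 * Implements hook_user_login().
 *
 * Sketch: send editors to a custom dashboard, everyone else to the
 * front page.
 */
function mymodule_user_login(&$edit, $account) {
  if (in_array('editor', $account->roles)) {
    $_GET['destination'] = 'dashboard';
  }
  else {
    $_GET['destination'] = '<front>';
  }
}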

Related articles:
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Categories: FLOSS Project Planets

Pronovix: Documenting APIs mini-conference: Video recordings available!

Planet Drupal - Thu, 2016-04-21 11:41

The video recordings of the talks from the Documenting APIs mini-conference are now available on our dedicated event page.

On March 4, 2016 we helped organise a special whole-day Write The Docs meetup in London, at which the community discussed the tools and processes used to document APIs.

Categories: FLOSS Project Planets