On the practicality of regex for email address processing

A colleague recently pointed me to a blog post: On the Futility of Email Regex Validation. For brevity I will refer to it as Futility in this article.

I admit that writing a regex that can successfully identify whether a string conforms to the RFC 5322 definition of an Internet Message header is an entertaining challenge, but Futility is not a useful guide for the practical programmer.

This is because it conflates RFC 5322 message headers with RFC 5321 address literals; in simple language, what constitutes a valid SMTP email address is different from what constitutes a valid message header in general. It is also because it incites the reader to become preoccupied with edge cases that are theoretically possible from a standards point of view, but which I will demonstrate have an infinitesimal probability of occurring “in the wild”.

This article will expand upon both of these assertions, discuss a few possible use cases for email regex, and conclude with annotated “cookbook” examples of practical email regex.

RFC 5321 supersedes 5322

The universality of SMTP for the transmission of email means that as a practical matter, no examination of email address formatting is complete without a close reading of the relevant IETF RFC, which is 5321.

5322 treats an email address as simply a generic message header with no special-case rules applying to it. This means that comments enclosed in parentheses are valid, even in a domain name.

The test suite referenced in Futility includes 10 tests which contain comments, diacritics, or unicode characters, and it indicates that 8 of them represent valid email addresses. This is incorrect, because RFC 5321 is explicit in stating that the domain name portions of email addresses “are restricted for SMTP purposes to consist of a sequence of letters, digits, and hyphens drawn from the ASCII character set.”

In the context of constructing a regular expression, it is hard to overstate the degree to which this constraint simplifies matters, especially as regards determining excessive string length. The annotation of the examples will highlight this below. It also implies some other practical considerations in the context of validation that we will explore further on.

Mailbox names in the wild

As per both RFCs, the technical name for the portion of the email address to the left of the “@” symbol is “mailbox”. Both RFCs allow considerable latitude in what characters are allowed in the mailbox portion. The only significant practical constraint is that quotes and parentheses must be balanced, something that is a real challenge to verify in vanilla regex.

However, real-world mailbox implementations are again the measure the practical programmer should employ. As a rule, the people who pay us frown upon 90% of our billable hours being directed to resolving the 10% of theoretical edge cases that may not even exist in real life.

Let’s look at the dominant email mailbox providers, consumer and business, and consider what types of email addresses they permit.

For consumer email, I did some primary research using a list of 5,280,739 email addresses that were leaked from Twitter accounts. Measured against roughly 115 million Twitter accounts, this gives us a 99% confidence level with a 0.055% margin of error for the entire population of Twitter, which should be reasonably representative of the general population of Internet email addresses. Here is what I learned:

  • 82% of addresses contained only ascii alphanumeric characters,
  • 15% contained only ascii alphanumeric and dots (ascii periods), for 97% of all addresses,
  • 3% contained only ascii alphanumeric, dots, and dashes, for a nominal 100% of email addresses.

However, this is a rounded 100%. For the trivia lovers out there, I also found:

  • 38 addresses with underscores for 0.00072% of the total
  • 27 with plus signs for 0.00051%, and
  • 1 address with unicode characters representing 0.00002% of the total.

The net effect is that assuming email address mailboxes contain only ascii alphanumeric, dots, and dashes will give you better than 5 9’s accuracy for consumer emails.
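For programmers who want to act on that finding directly, here is a minimal sketch of such a check. The pattern and the name PRACTICAL_MAILBOX are my own, not taken from any RFC or from Futility:

```python
import re

# Practical, not RFC-complete: ascii alphanumerics, dots, and dashes only,
# starting with an alphanumeric and capped at the 64-character mailbox limit.
PRACTICAL_MAILBOX = re.compile(r"^[A-Za-z0-9][A-Za-z0-9.-]{0,63}$")

assert PRACTICAL_MAILBOX.match("jane.doe-1")
assert not PRACTICAL_MAILBOX.match("jane+doe")
```

Per the numbers above, a check this simple would reject only a handful of addresses per million.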

For business emails, Datanyze reports that 6,771,269 companies use 91 different email hosting solutions. However the Pareto distribution holds, and 95.19% of those mailboxes are hosted by just 10 service providers.

Gmail for business (34.35% market share)

Google allows only ascii letters, numbers, and dots when creating a mailbox. It will however accept the plus sign when receiving email.

Microsoft Exchange Online (33.60%)

Allows only ascii letters, numbers, and dots.

GoDaddy Email Hosting (14.71%)

Uses Microsoft 365, allows only ascii letters, numbers, and dots.

7 additional providers (12.53%)

Not documented.

Unfortunately, that leaves us certain about only 82% of businesses, and we do not know how many mailboxes that represents. However, we do know that of the Twitter email addresses, only 400 out of 173,467 domains had more than 100 individual mailboxes represented. I believe that most of the remaining 99% of domains were business email addresses. In terms of mailbox naming policies at the server or domain level, I propose that it is reasonable to take these 237,592 email addresses as representing a population of 1 billion business email addresses with a 99% confidence level and a 0.25% margin of error, giving us close to 3 9’s when assuming that an email address mailbox contains only ascii alphanumerics, dots, and dashes.

Use cases

Again, with practicality foremost in our minds, let us consider under what circumstances we might need to programmatically identify a valid email address.

New account creation/user signups

In this use case, a prospective new customer is trying to create an account. There are two high-level strategies we might consider. In the first, we attempt to verify that the email address the new user provides is valid, and proceed with account creation synchronously. There are two reasons why you might not want to take this approach. The first is that although you might be able to validate that the email address has a valid form, it might nonetheless not exist. The other is that at any kind of scale, “synchronous” is a red-flag word. The pragmatic programmer should instead consider a fire-and-forget model, in which a stateless web front end passes the form information to a microservice or API that asynchronously validates the email by sending a unique link, which in turn triggers the completion of the account creation process.

Contact forms

In the case of a simple contact form, of the sort often used to download white papers, the potential downside of accepting strings that look like a valid email but are not is that you lower the quality of your marketing database by failing to validate that the email address really exists. So once again, the fire-and-forget model is a better option than programmatic validation of the string entered in the form.

Parsing of referrer logs and other large volumes of data

Which leads us to the real use case for programmatic email address identification in general, and regex in particular: anonymizing or mining large chunks of unstructured text.

I first came across this use case assisting a security researcher who needed to upload referrer logs to a fraud detection database. The referrer logs contained email addresses that needed to be anonymized before leaving the company’s walled garden.

These were files with hundreds of millions of lines, and there were hundreds of files a day. “Lines” could be close to a thousand characters long. Iterating through the characters in a line, applying complex tests (e.g. is this the first occurrence of @ in the line and is it part of a file name such as imagefile@2x.png?) using loops and standard string functions would have created a time complexity that was impossibly large. In fact the in-house development team of this (very large) company had declared it an impossible task.

I wrote the following compiled regex:

search_pattern = re.compile(r"[a-zA-Z0-9!#$%'*+\-^_`{|}~.]+(?:@|%40)(?!(?:\w+\.)*(?:jpg|png))(?:[\w\-]+\.)+[\w\-]+")

and dropped it into the following Python list comprehension:

results = [search_pattern.sub("redacted@example.com", line) for line in file]

I cannot remember just how fast it was, but it was fast. My friend could run it on a laptop and be done in minutes. It was accurate. We clocked it at 5 9’s looking at both false negatives and false positives.

My job was made somewhat easier by the fact that, as referrer logs, the files could only contain URL-legal characters, so I was able to map out any collisions, which I documented in the repo readme. I could have made it even simpler (and faster) if I had performed the email address analysis first and learned with assurance that all that was needed to hit the 5 9’s target was ascii alphanumerics, dots, and dashes. Nonetheless this is a good example of practicality and of scoping the solution to fit the actual problem to be solved. One of the greatest quotes in all of programming lore and history is the great Ward Cunningham’s admonition to take a second to remember exactly what you are trying to accomplish, and then ask yourself “What is the simplest thing that could possibly work?”

In the use case of parsing out (and optionally transforming) an email address from a large amount of unstructured text this solution was definitely the simplest thing I could think of.

Annotated cookbook

As I said at the beginning, I found the idea of building an RFC 5322 compliant regex amusing, so I will show you composable chunks of regex that deal with various aspects of the standard and explain how each chunk polices its rule. At the end I will show you what it looks like all assembled.

The structure of an email address is:

  1. The mailbox
    1. Legal characters
    2. Single dots (double dots are not legal)
    3. Folded White Space (RFC 5322 craziness)
    4. (A complete regex solution would also include balanced parentheses and/or quotes, but I do not have that yet. And very possibly never will.)
  2. The delimiter (@)
  3. The domain name
    1. Standard dns parsable domains
    2. IPv4 address literals
    3. IPv6 address literals
      1. IPv6-full
      2. IPv6-comp (for compressed)
        1. 1st form (2+ 16-bit groups of zero in the middle)
        2. 2nd form (2+ 16-bit groups of zero in the beginning)
        3. 3rd form (2 16-bit groups of zero at the end)
        4. 4th form (8 16-bit groups of zero)
      3. IPv6v4-full
      4. IPv6v4-comp (compressed)
        1. 1st form
        2. 2nd form
        3. 3rd form
        4. 4th form

Now for the regex.



First we have ^, which “anchors” the first character at the beginning of the string. This is to be used when validating a string that is supposed to contain nothing but a valid email; it makes sure that the first character is legal. If the use case is instead to find an email in a longer string, omit the anchor.

Next we have (?<mailbox>. This names the capture group for convenience. Inside the capture group are three regex chunks separated by the alternation symbol |, which means that a substring can match any one of the three expressions. Part of writing good (performant and predictable) regex is making sure that the three expressions are mutually exclusive; that is to say, a substring that matches one will definitely not match either of the other two. To do this we use specific character classes instead of the dreaded .*.

Unconditionally legal characters


The first alternate match is a character class enclosed in square brackets, which captures all ascii characters that are legal in an email mailbox except the dot, “folded white space”, the double quote, and the parentheses. The reason we exclude them is that they are only conditionally legal, which is to say that there are rules about how you can use them that have to be validated. We handle them in the next two alternate matches.
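The original snippet is not preserved here, but a sketch of that class (my reconstruction, assuming the RFC 5321 atext set minus the conditionally legal characters) might look like:

```python
import re

# Sketch, not the author's exact chunk: RFC 5321 "atext" characters,
# with the dot deliberately excluded because it is only conditionally legal.
legal_chars = r"[A-Za-z0-9!#$%&'*+/=?^_`{|}~-]"

assert re.fullmatch(legal_chars + "+", "user!name")
assert not re.fullmatch(legal_chars + "+", "user.name")
```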



The first such rule concerns the dot (period). In a mailbox, the dot is only allowed as a separator between two strings of legal characters, so two consecutive dots are not legal. To prevent a match when there are two consecutive dots, we use the regex negative lookbehind (?<!\.), which specifies that the next character (a dot) will not match if there is a dot preceding it. Regex lookarounds can be chained. There is also a negative lookahead before we get to the dot, (?!^), which enforces the rule that the dot cannot be the first character of the mailbox. After the dot there is another negative lookahead, (?!\.); this prevents a dot from being matched if it is immediately followed by a second dot.
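Putting those lookarounds together, the single-dot chunk can be sketched as follows (a reconstruction from the description above, not the original snippet), tested inside a minimal mailbox skeleton:

```python
import re

# (?<!\.) no dot before, (?!^) not at the start of the string,
# \. the dot itself, (?!\.) no dot after.
single_dot = r"(?<!\.)(?!^)\.(?!\.)"
sketch = re.compile(rf"^(?:[A-Za-z0-9]|{single_dot})+$")

assert sketch.match("a.b.c")
assert not sketch.match("a..b")
assert not sketch.match(".ab")
```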



This is some RFC 5322 nonsense about allowing multi-line headers in messages. I am ready to bet that in the history of email addresses no one has ever seriously created an address with a multiline mailbox (they might have done it as a joke). But I am playing the 5322 game, so here it is: the string of whitespace characters that creates the Folded White Space, as an alternate match.
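The exact alternate is not reproduced here; a sketch of an FWS chunk, assuming the RFC 5322 definition (a CRLF followed by at least one space or tab), would be:

```python
import re

# Folded White Space sketch: a line break immediately followed by
# whitespace, which RFC 5322 treats as a continuation of the header line.
fws = r"(?:\r\n[ \t]+)"

assert re.fullmatch(fws, "\r\n ")
assert not re.fullmatch(fws, "\r\n")
```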

Balanced double quotes and parenthesis

Both RFCs allow the use of double quotes as a way of enclosing (or escaping) characters that would normally be illegal. They also allow comments enclosed in parentheses, so that they will be human readable but not considered by the mail transfer agent (MTA) when interpreting the address. In both cases the characters are only legal if balanced: there has to be a pair of characters, one that opens and one that closes.

I am tempted to write that I have discovered a demonstrationem mirabilem, however this probably only works posthumously. The truth is that this is non-trivial in vanilla regex. I have an intuition that the recursive nature of “greedy” regex might be exploited to advantage, however I am unlikely to devote the time necessary to attack this problem for the next few years, and so in the very best tradition, I leave it as an exercise for the reader.

Mailbox length


Something that actually does matter is the maximum length of a mailbox: 64 characters. So after we close the mailbox capture group with a final closing parenthesis, we use a quantifier between curly braces to specify that we must match any of our alternates at least one time and no more than 64 times.
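Assembled from the chunks sketched earlier (all of them my reconstructions rather than the author's original snippets), the mailbox group with its length cap might look like this. Note that Python spells the named group (?P<...>):

```python
import re

# Named mailbox group: a legal character or a well-placed dot, repeated
# between 1 and 64 times, anchored for the whole-string validation case.
legal_chars = r"[A-Za-z0-9!#$%&'*+/=?^_`{|}~-]"
single_dot = r"(?<!\.)(?!^)\.(?!\.)"
mailbox = re.compile(rf"^(?P<mailbox>(?:{legal_chars}|{single_dot}){{1,64}})$")

assert mailbox.match("a" * 64)
assert not mailbox.match("a" * 65)
assert not mailbox.match("a..b")
```

Strictly speaking the quantifier caps repetitions of the group rather than characters, but since each alternate here consumes one character the effect is the same.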



The delimiter chunk starts off with the special case \s? because, according to Futility, a space is legal just before the delimiter, and I am taking their word for it. The rest of the capture group follows a similar pattern to singleDot: it will not match if preceded by a dot or a dash, or if followed immediately by another @.
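A sketch of that delimiter chunk, reconstructed from the description (the optional-space rule is taken from Futility, as above):

```python
import re

# Optional leading space, then an @ that is not preceded by a dot or dash
# and not followed by a second @.
delimiter = r"\s?(?<![.-])@(?!@)"

assert re.search(delimiter, "user@host")
assert re.search(delimiter, "user @host")
assert not re.search(delimiter, "user.@host")
```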

Domain name

Here, as in the mailbox, we have 3 alternate matches. And the last of these has nested in it another 4 alternate matches.

Standard dns parsable


This will not pass several of the tests in Futility, but as mentioned earlier, it complies strictly with RFC 5321, which has the final word.
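The dns chunk itself is not reproduced above; a sketch that follows the RFC 5321 restriction to ascii letters, digits, and hyphens (with the usual 63-character label cap) might be:

```python
import re

# Dot-separated labels; each starts and ends with a letter or digit and
# may contain hyphens in between, per RFC 5321 / RFC 1035.
label = r"[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
domain = re.compile(rf"^(?:{label}\.)+{label}$")

assert domain.match("mail.example-one.com")
assert not domain.match("exa_mple.com")
assert not domain.match("-bad.com")
```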



There is not too much to say about this. This is a well-known and easily available regex for IPv4 addresses.
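For completeness, here is one common form of that IPv4 regex (not necessarily the author's exact version), wrapped in the square brackets RFC 5321 requires around address literals:

```python
import re

# Four dotted octets, each 0-255, inside literal square brackets.
octet = r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
ipv4_literal = re.compile(rf"^\[(?:{octet}\.){{3}}{octet}\]$")

assert ipv4_literal.match("[192.168.0.1]")
assert not ipv4_literal.match("[256.1.1.1]")
```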



I was unable to find a good regular expression for IPv6 (and IPv6v4) addresses, so I wrote my own, carefully following the Backus-Naur notated rules from RFC 5321. I will not annotate every subgroup of the IPv6 regex, but I have named every subgroup to make it easy to pick apart and see what is going on. Nothing too interesting really, except maybe the way I combined greedy matching on the “left” side and non-greedy on the “right” in the IPv6Comp1 capture group.
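The author's full IPv6 regex is too long to reconstruct here, but the IPv6-full form alone, per the RFC 5321 grammar, gives the flavour (a sketch; the compressed and mixed IPv6v4 forms are omitted):

```python
import re

# IPv6-full: the "IPv6:" tag plus eight colon-separated groups of one to
# four hex digits, inside the address-literal brackets.
group = r"[0-9A-Fa-f]{1,4}"
ipv6_full = re.compile(rf"^\[IPv6:(?:{group}:){{7}}{group}\]$")

assert ipv6_full.match("[IPv6:2001:0db8:0000:0000:0000:0000:0000:0001]")
assert not ipv6_full.match("[IPv6:2001:db8::1]")
```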

The full monty

I have saved the final regex, along with the test data from Futility, and enhanced by some IPv6 test cases of my own, to Regex101. I hope you have enjoyed this article and that it proves to be useful and a time saver for many of you.


Interpreted, compiled. what. ever.

CPUs speak only one language; they don’t care what language you write in

Programmers tend to make a big deal over the supposed difference between compiled languages and interpreted ones. Or dynamic languages vs. statically typed languages.

The conventional wisdom goes like this: a compiled language is stored as machine code and is executed by the CPU with no delay, while an interpreted language is converted to machine language one instruction at a time, which makes it run slowly. Dynamic languages are slower still because of the overhead of figuring out types at runtime.

The reality is nothing like this

A long, long time ago, when programmers wrote to the bare metal and compilers were unsophisticated, this simplistic view may have been somewhat true. But there are two things that make it patently false today.

The first thing is that all of the popular programming languages these days — Java, JavaScript, Python, Scala, Kotlin, .NET, Ruby — run on virtual machines. The virtual machines themselves are written in a variety of languages*.

The second thing is that VMs make it easier to observe the program’s execution, which in turn makes JIT (Just in Time) compilation possible.

So how about a real example that illustrates why this matters: Java.

Java is compiled. Or is it? Well… sort of not really. Yes there is a compiler that takes your source and creates Java bytecode, but do you know what the JVM does with that bytecode as soon as it gets it?

First it builds a tree. It disassembles the Java bytecode and builds a semantic tree so that it can figure out what the source code was trying to do. In order to accomplish this it must undo all of the snazzy optimizations that the compiler so carefully figured out. It throws away the compiled code.

That sounds crazy! So why does it do this? The answer to that question is the key to understanding the title of this article.

The best way to understand code is to watch it running

This applies to humans, but it applies just as well to compilers. The reason why the modern JVM undoes the compilation and the optimizations is that “conventionally” compiled Java bytecode runs too slowly on the JVM. To attain the speed of execution for which Java is known these days, the JIT has to throw away the code that was statically compiled (and statically optimized) and “watch” the code running in JVM, and make optimizations based on the code’s actual behaviour at run time**.

Don’t go around saying things like “[insert language name here] is too slow because it is interpreted”, because that is simply not true. Languages are just syntax and can have many implementations (for example, there are several implementations of Java). You could say “[insert language name here] is slow because the interpreter/VM doesn’t have a JIT”, and that might be true.

Languages are not fast or slow

C++ is not a faster language. C++ runs fast simply because more hours have been spent on its compiler optimizations than on those of any other language, so of course they are better. It is worth noting that programs compiled using PGC++ and Clang are regularly 2 or 3 times faster than the same source code compiled using the AOCC compiler. This shows that it is the compiler and its optimizations — not the language itself — that dramatically affect execution performance.

Java is generally considered next fastest, and that is because it has had more hours invested in its JIT compiler than anything except C/C++.

Framework or Language redux

But it is not all down to the compiler. I have already written about the dangers of unexamined libraries and frameworks. That article could also have been titled Syntax or Library: Do you know which one you are using? I asked a trusted friend to review this article, and he brought up the very good point that memory access patterns are the big performance culprit overall. He made the point that C programs benefit from the fact that…

C only has primitive arrays and structs (but it’s very liberal with pointers so you can pretty much wrangle anything into these primitive structures with enough patience). Working with hashmaps or trees can be very painful, so you avoid those. This has the advantage of nudging you towards data layouts that make effective use of memory and caches. There’s also no closures and dynamically created functions, so your code and data mostly stay in predictable places in memory.

Then look at something like Ruby, where the language design encourages maximum dynamism. Everything tends to be behind a hashmap lookup, even calling a fixed unchanging method on an object (because someone might have replaced that method with another since you last called it). Anything can get moved or wrapped in yet another anonymous closure. This creates memory access patterns with a scattered “mosaic” of little objects all over the place, and the code spends its time hunting each piece of the puzzle from memory, which then points to the next one.

In short, C encourages very predictable memory layouts, while Ruby encourages very unpredictable memory layouts. An optimizing compiler can’t do much to fix this.
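Python sits close to Ruby on this spectrum, and a tiny example shows why the VM cannot treat even a plain method call as fixed:

```python
# Because a method can be replaced at runtime, every call site must look the
# method up again; the VM cannot simply hard-wire a jump to fixed code.
class Greeter:
    def hello(self):
        return "hello"

g = Greeter()
assert g.hello() == "hello"

# Swap the method on the class; the existing instance sees the change.
Greeter.hello = lambda self: "hijacked"
assert g.hello() == "hijacked"
```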

I had to agree. Which led me to articulate my point thusly: Programmers who do not understand where syntax stops and libraries begin are doomed to write programs whose execution they do not really understand.

My belief is that it is more difficult to write a truly awful C program because (if it runs at all) it would be too much work to manually reproduce the memory chaos that Ruby so casually produces.

We then had an interesting chat about how a certain large tech company created a “cleaned-up PHP++”. He has some interesting things to say, maybe he will write an article about that.

Thank you for your help Pauli.

So an implicit part of my contention is that modern programming languages lower the bar so that many programmers do not think about basic computer science (memory structures and computational complexity), and therefore have no basis upon which to understand how their programs will execute.

The other part of my contention is that any Turing-complete language could run about as quickly as any other when considered from a pure syntax perspective. For example, I believe that it would be absolutely possible to create a high-performance implementation of Ruby on, let’s say, the JVM. I readily acknowledge that most current Ruby code would break on that system, but that is a result of the programming choices made in the standard libraries, not a fundamental constraint of the language (syntax) itself.

I have to say (as a self-admitted cargo-cult programmer) that it is definitely possible that I just don’t understand Ruby syntax and/or the Church–Turing thesis.

CPUs (or VMs) speak only one language

Ruby and its shotgun approach to memory management notwithstanding, any programming language that had as many hours invested in optimizing its compilation as Java or C would run just as fast as Java or C, and this is because CPUs (and VMs) speak only one language: machine language. No matter what language you write in, sooner or later it gets compiled to machine language, and the things that affect performance are how fundamental computer science principles are implemented in the standard libraries and how effective the compilation is, not when it happens.

The moral of the story is that programmers should spend less time crushing on languages and more time understanding how they work under the hood.

I will finish with a quote from Alan Perlis’s foreword to SICP:

“…computers are never large enough or fast enough. Each breakthrough in hardware technology leads to more massive programming enterprises, new organizational principles, and an enrichment of abstract models. Every reader should ask himself periodically ‘Toward what end, toward what end?’ — but do not ask it too often lest you pass up the fun of programming for the constipation of bittersweet philosophy.”

* The original Sun JVM was written in C. The original IBM JVM was written in Smalltalk. Other JVMs have been written in C++.
The Java API (the class libraries without which most Java programmers would be unable to make even a simple application) is, for the most part, written in Java itself.

** It is worth noting that this happens again when the machine language of the VM “hits” the hardware CPU, which immediately takes a group of instructions, breaks them apart, looks for patterns it can optimize, and then rebuilds it all so it can pipeline microcode.