
Hello. My name is Sasha Barannik. At Mail.Ru Group I head the web development department, consisting of 15 employees. We have learned how to create websites for tens of millions of users and can easily cope with several million daily audiences. I myself have been doing web development for about 20 years, and for the last 15 years I have been programming primarily in PHP for my work. Although the capabilities of the language and the approach to development have changed greatly over this time, understanding the main vulnerabilities and the ability to protect against them remain key skills for any developer.

You can find many security articles and guides on the Internet. This book struck me as detailed yet concise and easy to follow. I hope it helps you learn something new and makes your sites more reliable and secure.

P.S. The book is long, so the translation will be published in several articles. So let's get started...

Another book on PHP security? There are many ways to start a book about PHP security. Unfortunately, I haven't read any of them, so I'll have to figure that out as I write. Perhaps I will start with the most basic and hope that everything will work out.

If we consider an abstract web application launched online by Company X, we can assume that it contains a number of components that, if hacked, can cause significant harm. What kind of harm, for example?

  • Harm to users: gaining access to email, passwords, personal data, bank card details, business secrets, contact lists, transaction history and deeply guarded secrets (like someone naming their dog Sparkle). Leaking this data harms users (individuals and companies). Web applications that misuse such data and hosts that take advantage of user trust can also cause harm.
  • Harm to Company X itself: the damage done to users hurts its reputation, compensation has to be paid, important business information is lost, and additional costs arise: for infrastructure, security improvements, cleanup, legal fees, generous severance packages for dismissed executives, and so on.
    I'll focus on these two categories because they cover most of the trouble that web application security is meant to prevent. Every company that has suffered a serious breach rushes to declare in press releases and on its website how seriously it takes security. So I advise you to take this problem to heart before you have to experience it in practice.

    Unfortunately, security issues are very often resolved after the fact. It is believed that the most important thing is to create a working application that meets the needs of users, within an acceptable budget and time frame. It's an understandable set of priorities, but security can't be ignored forever. It is much better to keep it in mind constantly, implementing specific solutions during development, when the cost of changes is still small.

    The secondary nature of security is largely a product of programming culture. Some programmers break out in a cold sweat at the thought of a vulnerability, while others will dispute a vulnerability's existence right up until they can prove it isn't one. Between these two extremes are the many programmers who simply shrug, because nothing has gone wrong for them yet; security is a strange world they find hard to relate to.

    Because web application security must protect users who trust the application's services, it is necessary to know the answers to the following questions:

  • Who wants to attack us?
  • How can they attack us?
  • How can we stop them?
    Who wants to attack us? The answer to the first question is very simple: everyone and everything. Yes, the entire Universe wants to teach you a lesson. That guy with an overclocked computer running Kali Linux? He has probably already attacked you. The shady man who enjoys throwing a wrench into other people's plans? He has probably already hired someone to attack you. The trusted REST API that feeds you data every hour? It was probably hacked a month ago so it could feed you infected data. Even I can attack you! So don't blindly believe this book. Assume I'm lying, and find a programmer who can drag me into the light and expose my harmful advice. Then again, maybe he is also going to hack you...

    The point of this paranoia is that it makes it easier to mentally categorize everything that interacts with your web application (“User”, “Hacker”, “Database”, “Untrusted Input”, “Manager”, “REST API”) and then assign each category a trust index. Obviously, the “Hacker” cannot be trusted, but what about the “Database”? “Untrusted Input” got its name for a reason, but would you really filter a blog post coming from your colleague's trusted Atom feed?

    Those who are serious about hacking web applications learn to exploit this thinking, preferring to attack trusted sources of data, which are less likely to be well defended, rather than the obviously untrusted ones. This is not a random choice: in real life, parties with a higher trust index arouse less suspicion. These are the data sources I look at first when analyzing an application.

    Let's return to the “Database”. If we assume that a hacker may gain access to the database (and we paranoids always assume this), then it can never be trusted. Yet most applications trust the database without question. From the outside a web application looks like a single whole, but inside it is a system of individual components exchanging data. If we treat all of these components as trusted, then when one of them is hacked the rest will quickly be compromised too. Such catastrophic security problems cannot be waved away with the phrase “If the database is hacked, we lose anyway.” You can say that, but it is far from certain you will actually lose if you distrust the database from the start and act accordingly!

    How can they attack us? The answer to the second question is a fairly long list: you can be attacked wherever any component or layer of the web application receives data. Essentially, web applications just process data and move it from place to place. User requests, databases, APIs, blog feeds, forms, cookies, repositories, PHP environment variables, configuration files, more configuration files, even the PHP files you execute - all of them can potentially carry data infected with a payload intended to breach security and cause damage. Essentially, if malicious data is not explicitly present in the PHP code handling the request, it will likely arrive as a “payload.” That assumes a) you wrote the PHP source code, b) it has been properly peer-reviewed, and c) you are not being paid by criminal organizations.

    If you use data from a source without verifying that it is completely safe and fit for use, you are potentially open to attack. You also need to check that the data you retrieve matches the data you stored. And if data is not made completely safe before output, you will have serious problems as well. All of this can be condensed into the PHP rule “Validate input; escape output.”
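    To make the rule concrete, here is a minimal sketch for a hypothetical comment field (the field name, length limit and markup are illustrative, not from the original text): validate the value as it comes in, escape it for the HTML context as it goes out.

    // Validate on the way in: reject anything outside the expected shape.
    $comment = $_POST['comment'] ?? '';
    if (!is_string($comment) || mb_strlen($comment) > 2000) {
        http_response_code(400);
        exit('Invalid comment');
    }

    // Escape on the way out: make the data safe for the HTML context it enters.
    echo '<p>' . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8') . '</p>';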

    These are the obvious data sources we must somehow control. Client-side storage can also be a source. For example, most applications identify users by assigning them unique session IDs that can be stored in cookies. If an attacker obtains the cookie value, he can impersonate another user. And while we can mitigate some of the risks of user data being intercepted or tampered with, we cannot guarantee the physical security of the user's computer. We can't even guarantee that users will consider "123456" the second-stupidest password after "password". To add spice, cookies are no longer the only kind of client-side storage.
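    One possible mitigation, not discussed above but worth a sketch: harden the session cookie itself so a stolen or fixed identifier is harder to exploit. This assumes PHP 7.3+ for the options-array form of session_set_cookie_params().

    // Harden the session cookie before starting the session.
    session_set_cookie_params([
        'lifetime' => 0,        // session cookie, not persistent
        'path'     => '/',
        'secure'   => true,     // only send over HTTPS
        'httponly' => true,     // hide from JavaScript, limiting XSS impact
        'samesite' => 'Strict', // limit cross-site sending
    ]);
    session_start();
    session_regenerate_id(true); // issue a fresh ID after login to prevent fixation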

    Another risk that is often overlooked is the integrity of your source code. In PHP it is increasingly popular to build applications from a large number of loosely coupled libraries, modules and framework packages. Many of them are downloaded from public repositories such as GitHub and installed with package installers like Composer and its web companion Packagist.org. The security of the source code therefore depends entirely on the security of all these third-party services and components. If GitHub is compromised, it will most likely be used to distribute code with malicious additions. If Packagist.org is, then an attacker will be able to redirect package requests to his own malicious packages.

    Currently, Composer and Packagist.org are subject to known vulnerabilities in dependency detection and package distribution, so always double check everything in your production environment and verify the source of all packages from Packagist.org.

    How can we stop them? Breaking into a web application can be anything from ridiculously simple to extremely time-consuming. It is fair to assume that every web application has a vulnerability somewhere. The reason is simple: all applications are written by people, and people make mistakes. Perfect security is therefore a pipe dream. Every application can contain vulnerabilities, and the programmer's job is to minimize the risk.

    You will have to think carefully to reduce the likelihood of damage from an attack on a web application. As the story progresses, I will talk about possible methods of attack. Some of them are obvious, others are not. But in any case, to solve the problem, you need to take into account some basic security principles.

    Basic Security Principles When developing security measures, their effectiveness can be assessed using the following considerations. I have already cited some above.
  • Don't trust anyone or anything.
  • Always assume the worst case scenario.
  • Apply multi-level protection (Defence-in-Depth).
  • Stick to the “Keep It Simple Stupid” (KISS) principle.
  • Adhere to the principle of “least privilege.”
  • Attackers smell ambiguity.
  • Read the documentation (RTFM), but never trust it.
  • If it hasn't been tested, it doesn't work.
  • It's always your fault!
    Let's briefly go over each point.

    1. Don't trust anyone or anything. As stated above, the correct position is to assume that everyone and everything your web application interacts with wants to hack it. That includes the other components and layers of the application needed to process a request. Anyone and anything. No exceptions.

    2. Always assume the worst-case scenario. Many security systems have one thing in common: no matter how well built, each can be breached. Keep that in mind and you will quickly see the value of this point. Focusing on the worst case helps you assess the possible scope and severity of an attack, and if it does happen, you may be able to soften the consequences with additional security measures and architectural changes. Perhaps the traditional solution you are using has already been replaced by something better?

    3. Apply multi-level protection (Defense-in-Depth). Defense in depth is borrowed from military science, because people long ago realized that multiple walls, sandbags, equipment, body armor and flasks shielding vital organs from enemy bullets and blades are the right approach to safety. You never know which of them will fail, and several layers of protection ensure you are not relying on a single field fortification or battle formation. And it is not only about single failures: imagine an attacker scaling a giant medieval wall with a ladder, only to find another wall behind it from which he is showered with arrows. Hackers will feel the same way.

    4. Keep It Simple Stupid (KISS). The best defenses are always simple. They are easy to design, implement, understand, use and test. Simplicity reduces errors, encourages correct behavior of the application, and makes the defense easier to deploy even in the most complex and hostile environments.

    5. Adhere to the principle of least privilege. Every participant in an exchange of information (user, process, program) should have only the access rights it needs to perform its function.

    6. Attackers smell obscurity. Security through obscurity rests on the assumption that if you use Defense A and tell no one what it is, how it works, or whether it even exists, it will magically help you because attackers will be left guessing. In reality it gives only a slight advantage: an experienced attacker can usually work out the measures you have taken, so you need explicit defenses. Those who are so confident that obscure protection removes the need for real protection deserve to be singled out for punishment, just to cure them of the illusion.

    7. Read the documentation (RTFM), but never trust it. The PHP manual is the Bible. Of course, it was not written by the Flying Spaghetti Monster, so technically it may contain a certain amount of half-truths, omissions, misinterpretations or errors that have not yet been noticed and corrected. The same goes for Stack Overflow.

    Specialized sources of security wisdom (PHP-focused and otherwise) generally provide more detailed knowledge. The closest thing to a Bible on PHP security is OWASP, which offers articles, guides, and tips. If it is not recommended to do something on OWASP, never do it!

    8. If it hasn't been tested, it doesn't work. When implementing security measures, you must write all the tests needed to verify them, including pretending to be a hacker who thoroughly deserves jail time. That may seem far-fetched, but familiarity with web application hacking techniques is good practice: you will learn about possible vulnerabilities, and your paranoia will grow. You don't have to tell management about your newfound appreciation for hacking web applications. Do use automated tools to find vulnerabilities; they are useful, but of course they are no substitute for quality code review or even manual application testing. The more resources you put into testing, the more reliable your application will be.

    9. It's always your fault! Programmers are used to assuming that vulnerabilities will only be exploited in isolated attacks whose consequences are negligible.

    For example, information leaks (a well-documented and widespread class of vulnerability) are often treated as minor problems because they do not directly harm users. Yet leaking information about software versions, development languages, source code locations, application and business logic, database structure, and other aspects of the web application's environment and internals is often essential to a successful attack.

    At the same time, successful attacks are often combinations of attacks: individually insignificant, but each opening the way for the next. For example, an SQL injection may require a specific username, which can be obtained with a timing attack against an administrative interface instead of a far more expensive and conspicuous brute force. The SQL injection in turn makes it possible to mount an XSS attack on a specific administrative account without drawing attention through a mass of suspicious log entries.
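    As an aside, one common defense against the timing-attack half of such a combination is to compare secrets in constant time. A minimal sketch with illustrative variable names, not taken from the original text:

    $storedToken   = 'value loaded from the database'; // illustrative
    $suppliedToken = $_GET['token'] ?? '';              // illustrative

    // hash_equals() takes the same time whether the first or the last byte differs,
    // so the comparison itself leaks nothing about how close the guess was.
    if (is_string($suppliedToken) && hash_equals($storedToken, $suppliedToken)) {
        // token accepted
    } else {
        // token rejected
    }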

    The danger of considering vulnerabilities in isolation is that you underestimate their threat and therefore treat them too carelessly. Programmers often can't be bothered to fix a vulnerability because they consider it too minor. It is also common to shift responsibility for secure development onto downstream programmers or users, often without documenting the specific problems: the very existence of the vulnerabilities is never acknowledged.

    Apparent insignificance is beside the point. It is irresponsible to force programmers or users to fix your vulnerabilities, especially when you haven't even told them those vulnerabilities exist.

    Input Validation Input Validation is the outer defense perimeter of your web application. It protects the core business logic, data processing, and output generation. Literally, everything outside this perimeter, except for the code executed by the current request, is considered enemy territory. All possible entrances and exits of the perimeter are guarded day and night by militant sentries who shoot first and ask questions later. Separately guarded (and very suspicious-looking) “allies” are connected to the perimeter, including “Model”, “Database” and “File System”. Nobody wants to shoot at them, but if they try their luck... bang. Each ally has its own perimeter, which may or may not trust ours.

    Remember what I said about whom you can trust? No one and nothing. In the PHP world the advice not to trust “user input” is everywhere, but users are only one trust category. By singling them out as untrusted we imply that everything else can be trusted, and that is wrong. Users are simply the most obvious unreliable source of input, because we don't know them and cannot control them.

    Validation criteria Input validation is both the most obvious and the least reliable defense of a web application. The vast majority of vulnerabilities stem from failures of the validation layer, so it is very important that this part of the defense works correctly. Since it can fail, keep the following considerations in mind. When implementing custom validators or using third-party validation libraries, remember that third-party solutions tend to handle generic tasks and omit key checks your application may need. Whenever you use a library for security purposes, independently verify that it works correctly and has no vulnerabilities of its own. Also keep in mind that PHP can exhibit strange and potentially unsafe behavior. Look at this example, taken from the filtering functions:

    filter_var("php://example.org", FILTER_VALIDATE_URL);
    The filter passes it without question. The problem is that the resulting php:// URL may then be passed to a PHP function that expects a remote HTTP address, not something that returns data from the executing PHP process (via the php:// handler). The vulnerability arises because the filter option provides no way to restrict which URI schemes are valid, even though the application expects an http, https or mailto link rather than a PHP-specific URI. Such an overly generic approach to validation should be avoided at all costs.
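    A stricter check can therefore whitelist the schemes the application actually accepts. A minimal sketch (the helper name is mine, not from the original text):

    function isAllowedUrl(string $url): bool
    {
        // First the general well-formedness check...
        if (filter_var($url, FILTER_VALIDATE_URL) === false) {
            return false;
        }
        // ...then restrict the scheme to what the application really expects.
        $scheme = strtolower((string) parse_url($url, PHP_URL_SCHEME));
        return in_array($scheme, ['http', 'https', 'mailto'], true);
    }

    var_dump(isAllowedUrl('php://example.org'));   // bool(false): php:// is not an allowed scheme
    var_dump(isAllowedUrl('https://example.org')); // bool(true)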

    Be careful with context Input validation should prevent unsafe data from being entered into a web application. A major stumbling block: Data security testing is typically only performed for the first intended use.

    Let's say I received data containing a name. I can check it fairly easily for apostrophes, hyphens, parentheses, spaces, and a wide range of alphanumeric Unicode characters. The name is then valid data for display, its first intended use. But use it somewhere else, say in a database query, and it ends up in a new context, where some characters that are perfectly legal in a name become dangerous: if the name is concatenated into an SQL string, it can become a vehicle for SQL injection.
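    For the database context specifically, the usual answer is not to lean on the earlier validation at all but to bind the value as a parameter. A minimal sketch, assuming a PDO connection in $pdo and an illustrative users table:

    // The name was fine for display, but for the database context we bind it as a
    // parameter instead of concatenating it into the SQL string.
    $stmt = $pdo->prepare('SELECT id, email FROM users WHERE name = :name');
    $stmt->execute([':name' => $name]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);

    The point is not that the earlier validation was wasted, but that each context brings its own defense.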

    It turns out that input validation is inherently unreliable. It is most effective at cutting off clearly invalid values: when something must be an integer, an alphanumeric string, or an HTTP URL. Such formats and values have clear constraints and, when properly checked, are less likely to pose a threat. Other values (free-form text, GET/POST arrays, HTML) are harder to validate and are more likely to carry malicious data.

    Since most of the time our application will be transferring data between contexts, we can't just check all the input data and call it a day. Checking at the entrance is only the first line of protection, but by no means the only one.

    Alongside input validation, another protection method, escaping, is very commonly used. With it, data is made safe for each new context it enters. Escaping is usually applied to protect against cross-site scripting (XSS), but it also serves as a filtering tool in many other tasks.

    Escaping protects the recipient from misinterpreting the outgoing data. But it is not enough on its own: as data enters each new context, it also needs validation specific to that particular context.
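    To illustrate how the same value needs different escaping in different contexts, here is a minimal sketch (the variable and field names are illustrative):

    $name = $_GET['name'] ?? '';

    // HTML body context
    echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8');

    // URL query-string context
    echo '<a href="/search?q=' . rawurlencode($name) . '">search</a>';

    // JavaScript context: JSON-encode rather than dropping the raw string into a script
    echo '<script>var userName = ' . json_encode($name) . ';</script>';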

    While this may be perceived as duplicating the initial input validation, the additional validation steps actually better address the specifics of the current context when data requirements are very different. For example, the data coming from a form might contain a percentage number. The first time we use it, we check that the value is indeed an integer. But when transferred to our application model, new requirements may arise: the value must fit within a certain range, which is required for the application’s business logic to work. And if this additional check is not performed in the new context, then serious problems can arise.

    Use only whitelists, not blacklists Blacklists and whitelists are the two primary approaches to validating input data. Black means checking for invalid data, and white means checking for valid data. Whitelists are preferable because only the data that we expect is transmitted during verification. In turn, blacklists only take into account programmers’ assumptions about all possible erroneous data, so it is much easier to get confused, miss something, or make a mistake.

    A good example is any validation routine designed to make HTML safe for unescaped output in a template. With a blacklist we would have to check that the HTML contains no dangerous elements, attributes, styles or executable JavaScript. That is a lot of work, and blacklist-based HTML sanitizers always manage to miss some dangerous combination of code. Whitelist-based tools eliminate this ambiguity by allowing only known permitted elements and attributes; everything else is escaped, stripped or removed, whatever it may be.

    So whitelists are preferable for any verification procedures due to higher security and reliability.
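    A minimal whitelist sketch for a single form field (the field name and the allowed values are illustrative):

    // Accept only values we explicitly expect; anything else is rejected outright.
    $allowedCountries = ['IE', 'GB', 'US'];
    $country = $_POST['country'] ?? '';

    if (!in_array($country, $allowedCountries, true)) {
        http_response_code(400);
        exit('Invalid country');
    }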

    Never attempt to correct input data Input data verification is often accompanied by filtering. If during verification we simply evaluate the correctness of the data (giving a positive or negative result), then filtering changes the data being checked so that it satisfies specific rules.

    As a rule, this is somewhat harmful. Traditional filters do things such as stripping everything except digits from phone numbers (removing stray parentheses and hyphens) or trimming extraneous horizontal or vertical whitespace. In such situations minimal cleanup is applied to eliminate display or transmission errors. It is easy, however, to get carried away and use filtering to try to block malicious data.

    One consequence of trying to correct input is that the attacker can predict the effect of your corrections. Say some string value is invalid: you search for it, delete it, and consider the filtering done. What if the attacker crafts a value in which the forbidden string is split by another copy of itself, so that your filter reassembles the payload?

    <scr<script>ipt>alert(document.cookie);</scr<script>ipt>
    In this example, naively filtering out the <script> tag achieves nothing: removing the literal <script> substrings leaves behind a perfectly well-formed script element. The same goes for filtering on any other specific format. All of this shows clearly why input validation must not be the application's last line of defense.

    Instead of trying to correct the input, simply use a whitelist-based validator and reject such input attempts entirely. And where you need to filter, always filter before performing the check, never after.
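    A small sketch of that ordering: filter first with minimal, predictable normalisation, then validate against a whitelist pattern and reject outright (the field name and pattern are illustrative):

    // Filter first: a harmless, predictable normalisation.
    $username = trim($_POST['username'] ?? '');

    // Then validate against a whitelist pattern and reject, never "repair".
    if (!preg_match('/^[a-z0-9_]{3,20}$/i', $username)) {
        http_response_code(400);
        exit('Invalid username');
    }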

    Never trust external validation tools and constantly monitor for vulnerabilities I noted earlier that validation is necessary whenever data is transferred to a new context. This also applies to validation performed outside of the web application itself. These include validation or other restrictions applied to HTML forms in the browser. Look at this form from HTML 5 (labels omitted):

    <form method="post">
        <select name="country" required>
            <option>Rep. Of Ireland</option>
            <option>United Kingdom</option>
        </select>
    </form>
    HTML forms can impose constraints on the data being entered. You can restrict choices to a fixed list of items, set minimum and maximum values, and limit text length. HTML 5 goes further: browsers can check URLs and email addresses and constrain dates, numbers and ranges (though support for the latter two is fairly patchy). Browsers can also validate input against a regular expression supplied in the pattern attribute.

    With all these controls out there, it's important to remember that their purpose is to improve the usability of your application. Any attacker is able to create a form that does not contain the restrictions from your original form. You can even create an HTTP client for automated form filling!
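    To see how little the browser-side controls are worth against a determined client, here is a sketch of a request that ignores them entirely (the URL and field names are illustrative):

    // Nothing forces a client to use your form: a few lines of PHP can POST any
    // values, ignoring maxlength, pattern, the <select> options and so on.
    $response = file_get_contents('https://app.example.com/signup', false, stream_context_create([
        'http' => [
            'method'  => 'POST',
            'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
            'content' => http_build_query([
                'country' => 'Atlantis',              // not one of the <select> options
                'name'    => str_repeat('A', 100000), // far beyond any maxlength
            ]),
        ],
    ]));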

    Another example of external validation tools is receiving data from third-party APIs, such as Twitter. This social network has a good reputation and is usually trusted without question. But since we are paranoid, we shouldn’t even trust Twitter. If compromised, its responses will contain insecure data for which we will not be prepared. Therefore, even here, use your own check so as not to be defenseless if something happens.

    Where we do rely on external validation controls, it is useful to monitor them for signs of tampering. For example, if an HTML form limits a field to a maximum length and we receive input that exceeds it, it is reasonable to assume the user bypassed the form. That lets us log violations of the external controls and take further measures against potential attacks, such as limiting access or throttling requests.
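    A sketch of such monitoring, assuming the form enforced a 255-character limit on the client side (the limit, field name and logging target are illustrative):

    $comment = (string) ($_POST['comment'] ?? '');

    // Input longer than the client-side limit means the form was bypassed:
    // treat it as a tampering signal, log it, and reject the request.
    if (mb_strlen($comment) > 255) {
        error_log(sprintf(
            'Form limit bypassed from %s: comment length %d',
            $_SERVER['REMOTE_ADDR'] ?? 'unknown',
            mb_strlen($comment)
        ));
        http_response_code(400);
        exit('Invalid input');
    }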

    Avoid type conversions in PHP PHP is not a strongly typed language, and most of its functions and operations are type unsafe. This can lead to serious problems. Moreover, it is not the values ​​themselves that are especially vulnerable, but the validators. For example:

    assert(0 == "0ABC");  // returns TRUE
    assert(0 == "ABC");   // returns TRUE (even without a leading digit!)
    assert(0 === "0ABC"); // returns NULL / issues a warning because the assertion fails
    When designing validators, make sure you use strict comparisons and manual type conversion when the input or output values ​​may be a string. For example, forms can return a string, so if you're working with data that needs to be an integer, be sure to check its type:

    function checkIntegerRange($int, $min, $max)
    {
        if (is_string($int) && !ctype_digit($int)) {
            return false; // contains non-digit characters
        }
        if (!is_int((int) $int)) {
            return false; // another non-integer value or exceeds PHP_MAX_INT
        }
        return ($int >= $min && $int <= $max);
    }

    A related pitfall concerns data fetched from remote services. When requesting it over HTTPS, make sure peer verification is enabled; with PHP streams this is controlled through the SSL context options:

    $context = stream_context_create(array("ssl" => array("verify_peer" => TRUE)));
    $body = file_get_contents("https://api.example.com/search?q=sphinx", false, $context);
    UPD. In PHP 5.6+, the ssl.verify_peer option is set to TRUE by default.

    The cURL extension has peer verification enabled out of the box, so nothing needs configuring. However, programmers sometimes take a thoughtless approach to the security of their libraries and applications, and you can find the following in any of the libraries your application depends on:

    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    Disabling peer verification in the SSL context or via curl_setopt() makes you vulnerable to man-in-the-middle attacks. Yet it gets disabled precisely to silence annoying errors, errors that may indicate an attack in progress, or merely that the application is talking to a host whose SSL certificate is misconfigured or expired.
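    For contrast, a sketch of the safe cURL configuration: leave verification on and, if certificate errors appear, fix the CA bundle or the server instead of turning checks off (the URL and CA-bundle path are illustrative):

    $ch = curl_init('https://api.example.com/search?q=sphinx');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSL_VERIFYPEER => true, // verify the peer certificate (the default)
        CURLOPT_SSL_VERIFYHOST => 2,    // verify the certificate matches the host name
        CURLOPT_CAINFO         => '/etc/ssl/certs/ca-certificates.crt',
    ]);
    $body = curl_exec($ch);
    curl_close($ch);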

    Web applications can often act as a proxy for user activity, such as a Twitter client. The least we can do is to maintain our applications to the high standards set by browsers that warn users and try to protect them from connecting to suspicious servers.

    Conclusions Most of the time we have everything we need to build a secure application. But we ourselves bypass reasonable restrictions to make development and debugging easier, or to silence annoying error output. Or, with the best of intentions, we overcomplicate the application's logic.

    But hackers don't eat their bread in vain either. They are looking for new ways to bypass our imperfect protections and are studying vulnerabilities in the modules and libraries we use. And if our goal is to create a secure web application, then theirs is to compromise our services and data. Ultimately, we all work to improve our products.
