Transport Layer Security


Source: https://en.wikipedia.org/wiki/Transport_Layer_Security

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), both of which are frequently referred to as ‘SSL’, are cryptographic protocols designed to provide communications security over a computer network.[1] They use X.509 certificates, and hence asymmetric cryptography, to authenticate the counterpart with whom they are communicating[2] and to negotiate a symmetric session key. This session key is then used to encrypt data flowing between the parties, providing data/message confidentiality, along with message authentication codes for message integrity and, as a by-product, message authentication. Several versions of the protocols are in widespread use in applications such as web browsing, email, Internet faxing, instant messaging, and voice-over-IP (VoIP). An important property in this context is forward secrecy, which ensures that the short-term session key cannot be derived from the long-term asymmetric secret key.[3]

As a consequence of choosing X.509 certificates, certificate authorities and a public key infrastructure are necessary to verify the relation between a certificate and its owner, as well as to generate, sign, and administer the validity of certificates. While this can be more beneficial than verifying the identities via a web of trust, the 2013 mass surveillance disclosures made it more widely known that certificate authorities are a weak point from a security standpoint, allowing man-in-the-middle attacks (MITM).[4][5]

The Internet Protocol Suite places TLS and SSL in the application layer, while the OSI model characterizes them as being initialized in Layer 5 (session layer) and operating in Layer 6 (presentation layer). The session layer employs a handshake using an asymmetric cipher in order to establish cipher settings and a shared key for a session; the presentation layer encrypts the rest of the communication using a symmetric cipher and the session key. TLS and SSL may be characterized as operating on behalf of the underlying transport layer protocol, which carries the encrypted data.
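
As a concrete illustration of that handshake-then-encrypt flow, here is a minimal Python sketch using the standard `ssl` module; the host is a placeholder chosen for this example.

```python
import socket
import ssl

# The default context loads the system's trusted CA certificates and
# enables certificate and hostname verification.
context = ssl.create_default_context()

hostname = "example.com"  # placeholder host for illustration
with socket.create_connection((hostname, 443)) as sock:
    # The TLS handshake happens here: asymmetric cryptography authenticates
    # the server via its X.509 certificate and negotiates a symmetric session key.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("cipher:", tls.cipher())      # the negotiated symmetric cipher suite
        # All application data from here on is encrypted with the session key.
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```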

TLS is an Internet Engineering Task Force (IETF) standards track protocol, first defined in 1999 and updated in RFC 5246 (August 2008) and RFC 6176 (March 2011). It is based on the earlier SSL specifications (1994, 1995, 1996) developed by Netscape Communications[6] for adding the HTTPS protocol to their Navigator web browser.

Why do embedded systems store server’s public certificate in ROM?


Source: http://security.stackexchange.com/questions/37814/why-do-embedded-systems-store-servers-public-certificate-in-rom

In the home automation scenario, a smart gateway can bridge many smart devices to the Internet. In many cases, a server’s public certificate is stored in the embedded system’s ROM during manufacturing.

For example, in the case of the AlertMe gateway, each gateway device is manufactured with a unique ID. In addition, it also holds the public certificate of the AlertMe servers in ROM. On first boot, the gateway device generates a random RSA key pair, connects to the AlertMe servers, verifies the server’s identity (using the ROM public certificate), and gives the server its newly generated public key.

My question is: since in the SSL/TLS connection the server will send its certificate to the gateway, why does the gateway have to store a public certificate in ROM before its first boot? If, as stated, it is for verification purposes, how does the gateway verify the server’s identity? Does it just compare the certificate in its ROM with the server’s certificate sent during the SSL handshake? Can’t the embedded system contact the CA to verify the identity of the server?

Moreover, on first boot, the gateway will generate an RSA key pair, and then the certificate. Where is the safest place in the Linux-based gateway/embedded system to store the key?

2 Answers

On your question of why the server certificate is saved in ROM:

The saved certificate is checked against the certificate sent back by the server; your assumption is correct. Therefore, only one server is trusted at this point.

You ask why the device does not simply contact a CA. That would be another way to do it, but even then the CA’s root certificate would have to be embedded in ROM: how else would you connect to the CA? You also need SSL/TLS to connect to the CA, because otherwise the whole PKI would be pointless. So at least one certificate must be embedded in ROM.
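
A minimal Python sketch of the pinning approach described above: the device compares the certificate presented during the handshake against the copy baked into ROM. The file path and host below are hypothetical, chosen only for illustration.

```python
import hashlib
import socket
import ssl

PINNED_CERT_PATH = "/rom/alertme_server.pem"   # hypothetical location of the ROM copy
HOST, PORT = "gateway.example.com", 443        # placeholder server

def fingerprint(der_bytes: bytes) -> str:
    return hashlib.sha256(der_bytes).hexdigest()

# Hash the certificate that was stored at manufacture time.
with open(PINNED_CERT_PATH) as f:
    pinned = fingerprint(ssl.PEM_cert_to_DER_cert(f.read()))

# Disable CA-based validation: trust comes from the pin, not a certificate chain.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        presented = tls.getpeercert(binary_form=True)  # DER bytes from the handshake
        if fingerprint(presented) != pinned:
            raise ssl.SSLError("server certificate does not match the ROM copy")
        # Server identity verified against ROM; safe to send the device's public key.
```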

As a slight side note: embedded developers are, in general, terrible at security. I did a whole presentation at a security conference on some of the more egregious stuff I’ve seen. If you’re adopting an embedded device that will sit in a privileged position in your network, or that will hold sensitive information, make sure you get it independently tested, and ensure that you place appropriate secondary controls (e.g. a hardware firewall) between it and your internal network. –  Polynomial Jun 21 ’13 at 8:26

Hardening PKI to Address the IoT and Mobile Devices


Source: https://guardtime.com/blog/part-1-hardening-pki-with-ksi

PKI Primer

Public Key Infrastructure (PKI) is a fairly complex set of technologies, people, policies, and procedures used together to request, create, manage, store, distribute, and (ultimately) revoke digital certificates, which bind public keys to an identity (such as an organization, address, person, or email address).

These certificates are the staple solution today for verifying that a public key belongs to an individual. They are used in everything from web-browser authentication to e-commerce and identity management for access to government and commercial e-services.

The impetus for PKI arose in the 1970s, before the Internet was the Internet. Converting PKI schemes into successful commercial operation has been a relatively slow-moving process; PKI’s progress as an industrial identity-control scheme has been hampered by its complexity and vulnerabilities.

Today, binding a public key to a user identity occurs via a Certificate Authority (CA). The CA is responsible for issuing digital certificates and acts as a third party trusted by both the owner of the certificate and the parties relying upon it.

Several trust anchors are involved in authorizing the binding of public keys to user identities. User identities are unique within each CA domain, and third-party Validation Authorities (VAs) can provide a validation service on behalf of the CA. Registration Authorities (RAs), Certificate Revocation Lists (CRLs), and Online Responders (Online Certificate Status) further complicate this picture. Together they ensure that signatures made with a private key cannot be legally denied by the signer (non-repudiation)[1] and that each certificate is valid: that it is bound to an individual in a way that is legally recognized, that its signature is valid, that the current date and time fall within its validity period, and that no certificate has been corrupted or malformed, all the way up the chain to the root certificate.
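
As a concrete illustration of those per-certificate checks (validity window, signature, walking up to the root), here is a minimal sketch using Python’s third-party `cryptography` package — an assumption of this illustration rather than anything named in the article. Real validation involves much more (revocation checks, name constraints, policy processing).

```python
from datetime import datetime

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def check_link(cert: x509.Certificate, issuer: x509.Certificate) -> None:
    """Check one parent/child link in a chain: validity window plus signature."""
    now = datetime.utcnow()
    if not (cert.not_valid_before <= now <= cert.not_valid_after):
        raise ValueError("certificate is outside its validity period")
    # Verify the child's signature with the issuer's public key.
    # (RSA with PKCS#1 v1.5 shown; other key types need the matching verify call.)
    issuer.public_key().verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )

def check_chain(chain: list) -> None:
    """Walk leaf -> intermediates -> root; each cert must be signed by the next."""
    for child, parent in zip(chain, chain[1:]):
        check_link(child, parent)
    check_link(chain[-1], chain[-1])  # the root certificate is self-signed
```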

PKI for M2M and IoT

Further complicating this picture, conventional PKI solutions typically require manual interaction to certify a public key during an identity check. While this is not a substantial issue for e-mail encryption, where the participants are natural persons, it becomes problematic for machine-to-machine (M2M) authentication, where the embedded systems are machines that require automatic processing of certification requests. How can any of these interactions be trusted without verification? As currently the only tool available for identity management, PKI has proven impractical as a scalable identity infrastructure for securing the billions of mobile devices of the Internet of Things (IoT), as well as the networks they use. There has been an explosion in the number of required certificates, as each device requires its own unique certificate. Moreover, many of these M2M networks are distributed and decentralized, potentially having to use many disparate CAs. The framework breaks. This liability makes it imperative to securely automate certificate provisioning, renewal, and revocation.

Enter Guardtime and Keyless Signature Infrastructure (KSI)

At Guardtime, we believe that in order to seriously deliver security to users of this framework, we have to understand the weaknesses of the tools and their components.

Guardtime’s Keyless Signature Infrastructure is a significant way to strengthen PKI’s weak areas without increasing costs. With the pervasiveness of today’s threats, the Internet now faces a fundamentally grave situation from the multitude of attack vectors that can affect PKI security (phishing, viruses, malware, identity data losses, misconfiguration, etc.).

The security community can no longer afford to blindly implement PKI technologies just because they are the only tool in the toolbox. PKI must be upgraded to address its scaling challenges, trust-anchor weaknesses, evidence portability, and administration liabilities.

What is KSI?

KSI is a technology invented by Guardtime to provide massively scalable strong data integrity, tamper evidence and backdating protection for literally any kind of digital asset. KSI provides verifiable guarantees that data has not been tampered with since it was signed.

A Guardtime signature provides proof of time, identity, and authenticity without the reliance on cryptographic keys and secrets, or trust anchors like systems administrators or Certificate Authorities.  Guardtime signatures can be verified in real-time, providing continuous integrity monitoring for literally any kind of digital asset or data object.
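
Guardtime’s actual protocol is proprietary, but the core idea behind keyless signatures — aggregating document hashes into a hash tree whose root is widely published, so that a signature is just a chain of sibling hashes and verification needs no keys or trusted parties — can be sketched. The Python below is a conceptual illustration only, with hypothetical document names; it is not the KSI protocol itself.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle (hash) tree bottom-up; return every level, root level last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:                    # duplicate the last node on odd levels
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def proof_path(levels, index):
    """Collect sibling hashes from a leaf up to the root: the 'signature'."""
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        path.append((index % 2, lvl[index ^ 1]))  # (is this node a right child?, sibling)
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from a leaf and its sibling path; no keys involved."""
    node = leaf
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

docs = [b"config-a", b"config-b", b"audit-log", b"credential"]  # hypothetical assets
levels = build_tree([h(d) for d in docs])
root = levels[-1][0]                     # the widely published, witnessed value
proof = proof_path(levels, 2)            # compact proof for b"audit-log"
assert verify(h(b"audit-log"), proof, root)
```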

How KSI Complements PKI

PKI has been tailored to enable secrecy, obfuscation, and identity verification, but it requires a large amount of trust to be vested in one or more trust anchors, from the public Certificate Authorities to internal Certificate Management Systems and the Certificate Revocation Lists themselves. KSI can be used to secure a PKI infrastructure and/or enhance CRLs by automating certificate revocation.

KSI does not require trust authorities and facilitates automated verification.  The signatures are devoid of any secret data and can be used to mathematically verify the integrity of the data, providing non-repudiation, while also protecting against backdating.

PKI vendors are developing CA suites to address the scalability and portability challenges associated with automated certificate management for large-scale (such as IoT and M2M) identity management. Guardtime can assist these PKI platform vendors in ensuring the coherence and real-time resilience of their platforms, as well as strongly backstopping the authenticity of identities on their ever-growing networks in a cost-effective, scalable, and compliant manner.

Securing Public Key Infrastructure Components with KSI

Guardtime’s Videri Gateway is the fundamental component needed to secure a complex infrastructure such as PKI. Videri is a KSI-standards-based authentication gateway (appliance) that can be used to ensure the integrity of critical PKI applications, credentials, and configuration.

Videri provides real-time application and integrity monitoring and validation for Public Key Infrastructure platforms. Critical PKI application, security, and static configuration components can be validated in real time to ensure that tampering and malicious attacks on the infrastructure have not occurred. Moreover, all audit and event logs associated with each PKI component become tamper-evident via provably secure, mathematically verifiable methods. Videri integrity validation and intelligence can be exported in real time to your Security Information and Event Management (SIEM) system, or to Guardtime’s GuardView SIEM, for real-time alerting and dashboard management of critical PKI components, applications, subsystems, configuration files, and credentials.

KSI can be used to secure literally any kind of hierarchical CA model, in which the CA hierarchy consists of clearly defined parent/child relationships: subordinate (child) CAs are certified by certificates issued by their parent CA, which bind each CA’s public key to its identity. Each of these relationships requires careful configuration management and planning, and creates operational and administrative dependencies on trust anchors that build up to the root CA. Operational dependence on these child CAs for mission-critical functions means real-time integrity monitoring is a must. The root CA is the most important point of trust in an organization; subordinate CAs are created for administrative benefit and are set up in practice to separate usage, organizational divisions, geographic divisions, load balancing, and backup and fault tolerance. These configurations and their associated policies benefit from KSI integration.

KSI can be implemented in these systems to secure, and provide real-time continuous integrity monitoring of, all critical CA management applications, such as the Certificate Services of a particular PKI deployment. These services include any CryptoAPI and Cryptographic Service Provider (CSP) dependencies underlying the PKI system for cryptographic operations and private-key management, as well as signing and verifying the certificate stores themselves, which are responsible for storing and managing certificates in the enterprise.

Moreover, with the increasing use of software-based CSPs, private keys and cryptographic operations are not well isolated from the server and operating system they run on. Given this vulnerability, application or OS tampering is a common exploitation approach for exposing keys, and is in fact one of PKI’s most glaring fundamental vulnerabilities. With KSI and Videri, configuration and application baselines can be monitored in real time to ensure that software-based CSPs stay secure, with real-time tamper evidence of dependent applications and of the OS components themselves in the event of attack.

For a CA installation, KSI is implemented to sign and provide real-time validation of CA configuration files and security-related files responsible for permission management between PKI subsystems, and to ensure tamper detection of any certificate templates used by the CA and infrastructure for Certificate Services. These include all Certificate Server services, public-key group policies, issuer statements, certificate database logs and their associated configuration files (and dependencies), as well as common web enrollment applications associated with a PKI deployment.
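
At its simplest, this kind of configuration-integrity monitoring amounts to hashing a set of known-good files and re-checking those hashes on a schedule. The sketch below is a generic, minimal illustration of that baseline-and-compare pattern, not Videri or KSI themselves (a KSI deployment would anchor the baseline in the signature infrastructure rather than a local JSON file); the file paths are hypothetical.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical CA files to watch; substitute your deployment's actual paths.
WATCHED = [
    "/etc/pki/ca/ca.conf",
    "/etc/pki/ca/templates/web-server.tpl",
    "/var/log/ca/certdb.log",
]

def file_digest(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def take_baseline(paths, out="baseline.json"):
    """Record a known-good hash for every watched file.
    (In a KSI deployment, this baseline itself would be signed.)"""
    baseline = {p: file_digest(p) for p in paths}
    Path(out).write_text(json.dumps(baseline, indent=2))

def check_against_baseline(baseline_file="baseline.json"):
    """Return every watched file whose current hash differs from the baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    return [p for p, digest in baseline.items() if file_digest(p) != digest]

# take_baseline(WATCHED)           # run once on a trusted system
# print(check_against_baseline())  # run periodically; a non-empty list means tampering
```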

[1] A word on liability: it is important to consider that the Internet’s increasing reliance on PKI has in fact created a reliance on CAs. CA vendors have NEVER paid out in a case of fraud or stolen credentials or identities. Looking under the hood, CAs agree, when pressed on their warranty programs, that there is no substantial backing for a claim if a certificate is used maliciously. The result: organizations using PKI have outsourced their trust to authorities that have no skin in the game and will not guarantee their security. Where is the accountability?

Diabetes management with an infusion pump


Source:

Diabetes Control and Complications Trial (DCCT)

A study by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), conducted from 1983 to 1993 in people with type 1 diabetes, which showed that good blood glucose control significantly helped prevent or delay diabetes complications.

Diabetes, diabetes mellitus

A condition characterized by hyperglycemia (high blood glucose) resulting from the body’s inability to use blood glucose for energy. In type 1 diabetes, the pancreas no longer makes insulin and therefore blood glucose cannot enter the cells to be used for energy. In type 2 diabetes, either the pancreas does not make enough insulin or the body is unable to use insulin correctly.

Diabetic ketoacidosis (see Ketoacidosis)

Duration of insulin action

The length of time that certain types of insulin remain active and available in your body after a bolus. This duration can vary greatly depending on the type of insulin you take. Only use rapid-acting insulin with the OmniPod® Insulin Management System.

Extended bolus

A feature of the OmniPod System that allows a meal bolus dose to be given over an extended period of time.

Fat

One of the three main energy sources in food. (The other two are carbohydrate and protein.) Fat is a concentrated source of energy, providing 9 calories per gram. Foods high in fat include oils, margarine, salad dressings, red meat, and whole-milk dairy foods.

Fiber

The indigestible part of plant foods. Foods that are high in fiber include broccoli, beans, raspberries, squash, whole-grain bread, and bran cereal. Fiber is a type of carbohydrate but does not raise blood glucose levels as other carbohydrates do.

Food Library

The Food Library is for reference only. (Food references contained in the library cannot be populated and used for calculations.)

The OmniPod System includes a reference library of over 1,000 common food items. The library shows each item’s carbohydrate, fat, protein, fiber, and calories for a single portion.

The items in the food library are derived from the USDA database, USDA National Nutrient Database for Standard Reference.

Glucose

A simple sugar (also known as dextrose) used by the body for energy. Without insulin, the body cannot use glucose for energy.

Hazard alarm

Notification by the PDM and Pod that a dangerous condition exists.

Healthcare provider

A professional who practices medicine or teaches people how to manage their health. All healthcare providers are a resource for valuable diabetes management information.

Hemoglobin A1c (HbA1c)

A test that measures a person’s average blood glucose level over the past 2 to 3 months. Also called glycosylated hemoglobin, the test shows the amount of glucose that sticks to the red blood cell, which is proportional to the amount of glucose in the blood.

Hyperglycemia (high blood glucose)

A higher-than-normal level of glucose in the blood; generally 250 mg/dL or higher.

Hypoglycemia (low blood glucose)

A lower-than-normal level of glucose in the blood; generally 70 mg/dL or lower.

Hypoglycemia unawareness

A condition in which a person does not feel or recognize the symptoms of hypoglycemia.

Infusing

Introducing a liquid substance under the skin into the body.

Infusion site

A place on the body where an infusion set or Pod is placed and cannula is inserted.

Insulin

A hormone that helps the body use glucose for energy. The beta cells of a healthy pancreas make insulin.

Insulin on board (IOB) (active insulin)

The amount of insulin that is still “active” in the body from a previous bolus dose. In the OmniPod System, insulin on board (IOB) is considered in two parts: the Insulin on Board (IOB) from a previous correction bolus and the IOB from a previous meal bolus. The amount of time insulin remains “on board” or “active” depends on each individual’s duration of insulin action. Talk with your healthcare provider to determine your duration of insulin action. The OmniPod System continually calculates the Insulin on Board (IOB) to help prevent “stacking” of bolus doses, which is a major cause of hypoglycemia.

Insulin reaction (see hypoglycemia)

Insulin-to-carbohydrate ratio (IC Ratio)

Number of grams of carbohydrate covered by one unit of insulin. For example, if your insulin-to-carbohydrate ratio is 1:15, then you need to deliver one unit of insulin to cover every fifteen grams of carbohydrate you eat.

In vitro

Literally, “in glass.” Refers to a biological function taking place in a laboratory dish rather than in a living organism.

Ketoacidosis (diabetic ketoacidosis or DKA)

A very serious condition in which extremely high blood glucose levels and a severe lack of insulin cause the body to break down fat for energy. The breakdown of fat releases ketones into the blood and urine. DKA can take hours or days to develop, with symptoms that include stomach pain, nausea, vomiting, fruity breath odor, and rapid breathing.

It is important to rule out ketoacidosis when you experience symptoms that might otherwise indicate the flu.

Ketones

Acidic by-products that result from the breakdown of fat for energy. The presence of ketones indicates that the body is using stored fat and muscle (instead of glucose) for energy.

Meal bolus (also known as carbohydrate bolus)

An amount of insulin administered before a meal or snack to ensure that blood glucose levels stay within the desired BG goal after a meal. The OmniPod System calculates a meal bolus by dividing the grams of carbohydrates you are about to eat by your insulin-to-carbohydrate ratio.

Multiple daily injections (MDIs)

Introducing insulin into the body with a syringe several times a day.

Occlusion

A blockage or interruption in insulin delivery.

Prime bolus

An amount of insulin used to fill the cannula, preparing it to begin delivering insulin under your skin.

Protein

One of the three main energy sources in food (the other two are carbohydrate and fat). Protein is necessary for the growth, maintenance, and repair of body cells and tissues. Protein contains 4 calories per gram. Foods high in protein include meat, poultry, fish, legumes and dairy products.

Reverse correction (negative correction)

Using an individual’s correction factor (sensitivity factor), the reverse correction is a calculation that reduces a portion of a meal bolus dose when the patient’s blood glucose level is below their blood glucose target. This feature is an option in the OmniPod® Insulin Management System and should be turned on or off according to the advice of a healthcare provider.

Sensitivity factor (see correction factor)

Sharps

Any medical item that may cause punctures or cuts to those handling them. Sharps include needles, syringes, scalpel blades, disposable razors, and broken medical glassware. Dispose of used sharps according to local waste disposal regulations.

Sharps container

A puncture-proof container used for storage and disposal of used sharps.

Soft Key

A button on the PDM whose label or function appears on the screen directly above the button. The label changes depending on the task you are performing.

Subcutaneous

Under the skin.

Suggested bolus calculator

A feature that calculates bolus doses with user-specific settings and inputs. The settings used to calculate a suggested bolus are target BG, insulin-to-carbohydrate (IC) ratio, correction factor (CF) and duration of insulin action. The inputs used to calculate a suggested bolus are current BG, carbs entered, and insulin on board. The bolus calculator can be turned Off or On in the PDM.
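
The bolus-related entries above (meal bolus, correction factor, IC ratio, insulin on board) describe simple arithmetic, illustrated in simplified form by the Python sketch below. This is not the OmniPod firmware’s algorithm (which, for example, tracks meal and correction IOB separately), and the example numbers are hypothetical; dosing decisions belong with a healthcare provider.

```python
def suggested_bolus(current_bg, target_bg, carbs_g, ic_ratio,
                    correction_factor, iob_units):
    """Simplified suggested-bolus arithmetic from the glossary definitions.

    current_bg / target_bg : mg/dL
    carbs_g                : grams of carbohydrate about to be eaten
    ic_ratio               : grams covered by 1 unit (e.g. 15 for a 1:15 ratio)
    correction_factor      : mg/dL lowered by 1 unit of insulin
    iob_units              : insulin on board, still active from earlier boluses
    """
    meal_bolus = carbs_g / ic_ratio
    correction_bolus = (current_bg - target_bg) / correction_factor
    # When BG is below target, the correction term is negative and reduces
    # the meal bolus -- the 'reverse correction' described above.
    total = meal_bolus + correction_bolus - iob_units
    return max(total, 0.0)  # never suggest a negative dose

# Example: BG 180 mg/dL, target 120, 45 g carbs, 1:15 ratio, CF 50, 0.5 U on board:
# meal = 45/15 = 3.0 U; correction = (180-120)/50 = 1.2 U; 3.0 + 1.2 - 0.5 = 3.7 U
print(suggested_bolus(180, 120, 45, 15, 50, 0.5))  # 3.7
```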

Target blood glucose (BG) level

The ideal number at which you would like your blood glucose level to be. The OmniPod System uses this number in calculating bolus doses.

Temp basal

A basal rate that is used to cover predictable, short-term changes in basal insulin need. Temporary rates are often used during exercise and for sick-day insulin adjustments.

Temporary basal preset

An adjustment in a basal rate, in either % or U/hr, that can be assigned a custom name and preprogrammed into the PDM.

Time segment (see basal segment)

The RedMonk Programming Language Rankings: January 2015


Source: http://redmonk.com/sogrady/2015/01/14/language-rankings-1-15/

Update: These rankings have been updated. The third quarter snapshot is available here.

With two quarters having passed since our last snapshot, it’s time to update our programming language rankings. Since Drew Conway and John Myles White originally performed this analysis late in 2010, we have been regularly comparing the relative performance of programming languages on GitHub and Stack Overflow. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion (Stack Overflow) and usage (GitHub) in an effort to extract insights into potential future adoption trends.

In general, the process has changed little over the years. With the exception of GitHub’s decision to no longer provide language rankings on its Explore page – they are now calculated from the GitHub archive – the rankings are performed in the same manner, meaning that we can compare rankings from run to run, and year to year, with confidence.

This is brought up because one result in particular, described below, is very unusual. But in the meantime, it’s worth noting that the steady decline in correlation between rankings on GitHub and Stack Overflow observed over the last several iterations of this exercise has been arrested, at least for one quarter. After dropping from its historical .78 – .8 correlation to .74 during the Q314 rankings, the correlation between the two properties is back up to .76. It will be interesting to observe whether this is a temporary reprieve, or if the lack of correlation itself was the anomaly.
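
For readers curious what a correlation figure like .76 means concretely, the sketch below computes Spearman’s rank correlation between two rank lists with SciPy. The data are made up and RedMonk’s exact computation isn’t specified in this post; this is only an illustration of correlating two rankings.

```python
from scipy.stats import spearmanr

# Hypothetical per-language ranks on each property (1 = most popular).
github_rank        = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
stackoverflow_rank = [2, 1, 3, 5, 4, 6, 8, 7, 10, 9]

rho, p_value = spearmanr(github_rank, stackoverflow_rank)
print(f"rank correlation: {rho:.2f}")  # about 0.95 for this toy data
```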

For the time being, however, the focus will remain on the current rankings. Before we continue, please keep in mind the usual caveats.

  • To be included in this analysis, a language must be observable within both GitHub and Stack Overflow.
  • No claims are made here that these rankings are representative of general usage more broadly. They are nothing more or less than an examination of the correlation between two populations we believe to be predictive of future use, hence their value.
  • There are many potential communities that could be surveyed for this analysis. GitHub and Stack Overflow are used here first because of their size and second because of their public exposure of the data necessary for the analysis. We encourage, however, interested parties to perform their own analyses using other sources.
  • All numerical rankings should be taken with a grain of salt. We rank by numbers here strictly for the sake of interest. In general, the numerical ranking is substantially less relevant than the language’s tier or grouping. In many cases, one spot on the list is not distinguishable from the next. The separation between language tiers on the plot, however, is generally representative of substantial differences in relative popularity.
  • GitHub language rankings are based on raw lines of code, which means that repositories written in a given language that include a greater amount of code in a second language (e.g. JavaScript) will be read as the latter rather than the former.
  • In addition, the further down the rankings one goes, the less data available to rank languages by. Beyond the top tiers of languages, depending on the snapshot, the amount of data to assess is minute, and the actual placement of languages becomes less reliable the further down the list one proceeds.

(Chart: GitHub rank vs. Stack Overflow rank for each language, grouped by tier)

Besides the above plot, which can be difficult to parse even at full size, we offer the following numerical rankings. As will be observed, this run produced several ties which are reflected below (they are listed out here alphabetically rather than consolidated as ties because the latter approach led to misunderstandings).

1 JavaScript
2 Java
3 PHP
4 Python
5 C#
5 C++
5 Ruby
8 CSS
9 C
10 Objective-C
11 Perl
11 Shell
13 R
14 Scala
15 Haskell
16 Matlab
17 Go
17 Visual Basic
19 Clojure
19 Groovy

By the narrowest of margins, JavaScript edged Java for the top spot in the rankings, but as always, the difference between the two is so marginal as to be insignificant. The most important takeaway is that the language frequently written off for dead and the language sometimes touted as the future have shown sustained growth and traction and remain, according to this measure, the most popular offerings.

Outside of that change, the Top 10 was effectively static. C++ and Ruby each jumped one spot to split fifth place with C#, but that minimal distinction reflects the lack of movement in the rest of the “Tier 1,” or top grouping of languages. PHP has not shown the ability to unseat either Java or JavaScript, but it has, for its part, remained unassailable in the third position. After a brief drop in Q1 of 2014, Python has been stable in the fourth spot, and the rest of the Top 10 looks much as it has for several quarters.

Further down in the rankings, however, there are several trends worth noting – one in particular.

  • R: Advocates of the language have been pleased by four consecutive gains in these rankings, but this quarter’s snapshot showed R instead holding steady at 13. This was predictable, however, given that the languages remaining ahead of it – from Java and JavaScript at the top of the rankings to Shell and Perl just ahead – are more general purpose and thus likely to be more widely used. Even if R’s growth does stall at 13, however, it will remain the most popular statistical language by this measure, and this in spite of substantial competition from general-purpose alternatives like Python.
  • Go: In our last rankings, it was predicted based on its trajectory that Go would become a Top 20 language within six to twelve months. Six months following that, Go can consider that mission accomplished. In this iteration of the rankings, Go leapfrogs Visual Basic, Clojure and Groovy – and displaces Coffeescript entirely – to take number 17 on the list. Again, we caution against placing too much weight on the actual numerical position, because the differences between one spot and another can be slight, but there’s no arguing with the trendline behind Go. While the language has its critics, its growth prospects appear secure. And should the Android support in 1.4 mature, Go’s path to becoming a Top 10 if not Top 5 language would be clear.
  • Julia/Rust: Long two of the notable languages to watch, Julia and Rust’s growth has typically been in lockstep, though not for any particular functional reason. This time around, however, Rust outpaced Julia, jumping eight spots to 50 against Julia’s more steady progression from 57 to 56. It’s not clear what’s responsible for the differential growth, or more specifically if it’s problems with Julia, progress from Rust (with a DTrace probe, even), or both. But while both remain languages of interest, this ranking suggests that Rust might be poised to outpace its counterpart.
  • Coffeescript: As mentioned above, Coffeescript dropped out of the Top 20 languages for the first time in almost two years, and may have peaked. From its high ranking of 17 in Q3 of 2013, in the three runs since, it has clocked in at 18, 18 and now 21. The “little language that compiles into JavaScript” positioned itself as a compromise between JavaScript’s ubiquity and syntactical eccentricities, but support for it appears to be slowly eroding. How it performs in the third quarter rankings should provide more insight into whether this is a temporary dip or more permanent decline.
  • Swift: Last, there is the curious case of Swift. During our last rankings, Swift was listed as the language to watch – an obvious choice given its status as the Apple-anointed successor to the #10 language on our list, Objective-C. Being officially sanctioned as the future standard for iOS applications everywhere was obviously going to lead to growth. As was said during the Q3 rankings which marked its debut, “Swift is a language that is going to be a lot more popular, and very soon.” Even so, the growth that Swift experienced is essentially unprecedented in the history of these rankings. When we see dramatic growth from a language it typically has jumped somewhere between 5 and 10 spots, and the closer the language gets to the Top 20 or within it, the more difficult growth is to come by. And yet Swift has gone from our 68th ranked language during Q3 to number 22 this quarter, a jump of 46 spots. From its position far down on the board, Swift now finds itself one spot behind Coffeescript and just ahead of Lua. As the plot suggests, Swift’s growth is more obvious on Stack Overflow than GitHub, where the most active Swift repositories are either educational or infrastructure in nature, but even so the growth has been remarkable. Given this dramatic ascension, it seems reasonable to expect that the Q3 rankings this year will see Swift as a Top 20 language.

The Net

Swift’s meteoric growth notwithstanding, the high level takeaway from these rankings is stability. The inertia of the Top 10 remains substantial, and what change there is in the back half of the Top 20 or just outside of it – from Go to Swift – is both predictable and expected. The picture these rankings paint is of an environment thoroughly driven by developers; rather than seeing a heavy concentration around one or two languages as has been an aspiration in the past, we’re seeing a heavy distribution amongst a larger number of top tier languages followed by a long tail of more specialized usage. With the exceptions mentioned above, then, there is little reason to expect dramatic change moving forward.

Update: The above language plot chart was based on an incorrect Stack Overflow tag for Common Lisp and thereby failed to incorporate existing activity on that site. This has been corrected.

How Europeans evolved white skin


Source: http://news.sciencemag.org/archaeology/2015/04/how-europeans-evolved-white-skin

Most of us think of Europe as the ancestral home of white people. But a new study shows that pale skin, as well as other traits such as tallness and the ability to digest milk as adults, arrived in most of the continent relatively recently. The work, presented here last week at the 84th annual meeting of the American Association of Physical Anthropologists, offers dramatic evidence of recent evolution in Europe and shows that most modern Europeans don’t look much like those of 8000 years ago.

The origins of Europeans have come into sharp focus in the past year as researchers have sequenced the genomes of ancient populations, rather than only a few individuals. By comparing key parts of the DNA across the genomes of 83 ancient individuals from archaeological sites throughout Europe, the international team of researchers reported earlier this year that Europeans today are a blend of at least three ancient populations of hunter-gatherers and farmers, who moved into Europe in separate migrations over the past 8000 years. The study revealed that a massive migration of Yamnaya herders from the steppes north of the Black Sea may have brought Indo-European languages to Europe about 4500 years ago.

Now, a new study from the same team drills down further into that remarkable data to search for genes that were under strong natural selection—including traits so favorable that they spread rapidly throughout Europe in the past 8000 years. By comparing the ancient European genomes with those of recent ones from the 1000 Genomes Project, population geneticist Iain Mathieson, a postdoc in the Harvard University lab of population geneticist David Reich, found five genes associated with changes in diet and skin pigmentation that underwent strong natural selection.

First, the scientists confirmed an earlier report that the hunter-gatherers in Europe could not digest the sugars in milk 8000 years ago, according to a poster. They also noted an interesting twist: The first farmers also couldn’t digest milk. The farmers who came from the Near East about 7800 years ago and the Yamnaya pastoralists who came from the steppes 4800 years ago lacked the version of the LCT gene that allows adults to digest sugars in milk. It wasn’t until about 4300 years ago that lactose tolerance swept through Europe.

When it comes to skin color, the team found a patchwork of evolution in different places, and three separate genes that produce light skin, telling a complex story of how Europeans’ skin evolved to be much lighter during the past 8000 years. The modern humans who came out of Africa to originally settle Europe about 40,000 years ago are presumed to have had dark skin, which is advantageous in sunny latitudes. And the new data confirm that about 8500 years ago, early hunter-gatherers in Spain, Luxembourg, and Hungary also had darker skin: They lacked versions of two genes—SLC24A5 and SLC45A2—that lead to depigmentation and, therefore, pale skin in Europeans today.

But in the far north—where low light levels would favor pale skin—the team found a different picture in hunter-gatherers: Seven people from the 7700-year-old Motala archaeological site in southern Sweden had both light skin gene variants, SLC24A5 and SLC45A2. They also had a third gene, HERC2/OCA2, which causes blue eyes and may also contribute to light skin and blond hair. Thus ancient hunter-gatherers of the far north were already pale and blue-eyed, but those of central and southern Europe had darker skin.

Then, the first farmers from the Near East arrived in Europe; they carried both genes for light skin. As they interbred with the indigenous hunter-gatherers, one of their light-skin genes swept through Europe, so that central and southern Europeans also began to have lighter skin. The other gene variant, SLC45A2, was at low levels until about 5800 years ago when it swept up to high frequency.

The team also tracked complex traits, such as height, which are the result of the interaction of many genes. They found that selection strongly favored several gene variants for tallness in northern and central Europeans, starting 8000 years ago, with a boost coming from the Yamnaya migration, starting 4800 years ago. The Yamnaya have the greatest genetic potential for being tall of any of the populations, which is consistent with measurements of their ancient skeletons. In contrast, selection favored shorter people in Italy and Spain starting 8000 years ago, according to the paper now posted on the bioRxiv preprint server. Spaniards, in particular, shrank in stature 6000 years ago, perhaps as a result of adapting to colder temperatures and a poor diet.

Surprisingly, the team found no immune genes under intense selection, which is counter to hypotheses that diseases would have increased after the development of agriculture.

The paper doesn’t specify why these genes might have been under such strong selection. But the likely explanation for the pigmentation genes is to maximize vitamin D synthesis, said paleoanthropologist Nina Jablonski of Pennsylvania State University (Penn State), University Park, as she looked at the poster’s results at the meeting. People living in northern latitudes often don’t get enough UV to synthesize vitamin D in their skin, so natural selection has favored two genetic solutions to that problem—evolving pale skin that absorbs UV more efficiently or favoring lactose tolerance to be able to digest the sugars and vitamin D naturally found in milk. “What we thought was a fairly simple picture of the emergence of depigmented skin in Europe is an exciting patchwork of selection as populations disperse into northern latitudes,” Jablonski says. “This data is fun because it shows how much recent evolution has taken place.”

Anthropological geneticist George Perry, also of Penn State, notes that the work reveals how an individual’s genetic potential is shaped by their diet and adaptation to their habitat. “We’re getting a much more detailed picture now of how selection works.”


Tarek Fatah – Islamic State, Islam, India and Pakistan


Source: https://www.youtube.com/watch?v=SoSENbL4v5E
