- Mar 08, 2016
-
Todd Short authored
* Perform ALPN after the SNI callback; the SSL_CTX may change due to that processing.
* Add flags to indicate that we actually sent ALPN, so we can properly error out if ALPN is unexpectedly received.
* Clean up ssl3_free(): no need to explicitly clear fields when doing memset.
* Document ALPN functions.
Signed-off-by: Rich Salz <rsalz@openssl.org> Reviewed-by: Emilia Käsper <emilia@openssl.org>
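Since this change means the SSL_CTX in effect after SNI processing is the one whose ALPN callback runs, a minimal sketch of registering a per-SSL_CTX ALPN selection callback (server side; the protocol list and helper name are illustrative, not the commit's code) could look like:

    #include <openssl/ssl.h>

    /* Wire-format ALPN list: length-prefixed protocol names. */
    static const unsigned char alpn_protos[] = {
        2, 'h', '2',
        8, 'h', 't', 't', 'p', '/', '1', '.', '1'
    };

    static int alpn_select_cb(SSL *ssl, const unsigned char **out,
                              unsigned char *outlen, const unsigned char *in,
                              unsigned int inlen, void *arg)
    {
        unsigned char *selected;

        if (SSL_select_next_proto(&selected, outlen, alpn_protos,
                                  sizeof(alpn_protos), in, inlen)
                != OPENSSL_NPN_NEGOTIATED)
            return SSL_TLSEXT_ERR_NOACK;    /* no overlap: do not send ALPN */
        *out = selected;
        return SSL_TLSEXT_ERR_OK;
    }

    /* Register on every SSL_CTX that the SNI callback might switch to. */
    void setup_alpn(SSL_CTX *ctx)
    {
        SSL_CTX_set_alpn_select_cb(ctx, alpn_select_cb, NULL);
    }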
-
Richard Levitte authored
Reviewed-by: Matt Caswell <matt@openssl.org>
-
Richard Levitte authored
For config targets such as 'dist', which don't have a BASE template, we still need a default build scheme. Additionally, the unified Makefile template's target 'tar' wasn't quite as flexible as the unixmake one. Finally, .travis-create-release.sh can be somewhat simplified now that it builds with the unified build scheme. Reviewed-by: Matt Caswell <matt@openssl.org>
-
Richard Levitte authored
We still want to preserve a couple of classic builds; the quickest way is to change all --unified to --classic. Reviewed-by: Matt Caswell <matt@openssl.org>
-
Richard Levitte authored
Reviewed-by: Andy Polyakov <appro@openssl.org>
-
Richard Levitte authored
Because of the unified scheme, building on different platforms is very similar. We currently have Unix and OpenVMS on the unified scheme, which means that a separate INSTALL.VMS is no longer needed. Reviewed-by: Matt Caswell <matt@openssl.org>
-
Richard Levitte authored
Reviewed-by: Matt Caswell <matt@openssl.org>
-
Richard Levitte authored
Reviewed-by: Matt Caswell <matt@openssl.org>
-
Richard Levitte authored
This reverts commit 242ffb05. Reviewed-by: Matt Caswell <matt@openssl.org>
-
Matt Caswell authored
Not all platforms support multiblock. Building without it fails prior to this fix. RT#4396 Reviewed-by: Richard Levitte <levitte@openssl.org>
-
- Mar 07, 2016
-
Viktor Dukhovni authored
Bug reported by Michel Sales. Reviewed-by: Rich Salz <rsalz@openssl.org>
-
Dr. Stephen Henson authored
The kinv/r fields in the DSA structure are not used by OpenSSL internally and should not be used in general. Reviewed-by: Rich Salz <rsalz@openssl.org>
-
Mat authored
Add UI to known algorithms in mkdef.pl Signed-off-by: Rich Salz <rsalz@openssl.org> Reviewed-by: Richard Levitte <levitte@openssl.org>
-
Kurt Roeckx authored
Reviewed-by: Rich Salz <rsalz@openssl.org> GH: #811
-
Matt Caswell authored
The numpipes argument to ssl3_enc/tls1_enc is actually the number of records passed in the array. To make this clearer rename the argument to |n_recs|. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Rename the have_whole_app_data_record_waiting() function to include the ssl3_record prefix...and make it a bit shorter. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Move out most of the boilerplate code that is common between aes128-cbc and aes128-cbc-hmac-sha1 into helper functions to improve code reuse. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
The _hidden_* variables were being created on-the-fly. It is better to create them once up front during bind to avoid any potential race conditions. Reviewed-by: Tim Hudson <tjh@openssl.org>
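A rough sketch of the pattern (function and variable names are hypothetical, not the engine's actual code): build the hidden EVP_CIPHER once at bind time instead of on first use:

    #include <openssl/evp.h>
    #include <openssl/objects.h>

    static EVP_CIPHER *_hidden_aes_128_cbc = NULL;

    /* Called once from the engine's bind function. */
    static int create_hidden_ciphers(void)
    {
        if (_hidden_aes_128_cbc != NULL)
            return 1;                       /* already created */

        _hidden_aes_128_cbc = EVP_CIPHER_meth_new(NID_aes_128_cbc,
                                                  16 /* block size */,
                                                  16 /* key length */);
        if (_hidden_aes_128_cbc == NULL
                || !EVP_CIPHER_meth_set_iv_length(_hidden_aes_128_cbc, 16)
                || !EVP_CIPHER_meth_set_flags(_hidden_aes_128_cbc,
                                              EVP_CIPH_CBC_MODE)) {
            EVP_CIPHER_meth_free(_hidden_aes_128_cbc);
            _hidden_aes_128_cbc = NULL;
            return 0;
        }
        /* The init/do_cipher/cleanup callbacks would be set here too. */
        return 1;
    }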
-
Matt Caswell authored
Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
We had the function EVP_CIPHER_CTX_cipher_data which is newly added for 1.1.0. As we now also need an EVP_CIPHER_CTX_set_cipher_data it makes more sense for the former to be called EVP_CIPHER_CTX_get_cipher_data. Reviewed-by: Tim Hudson <tjh@openssl.org>
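As a hedged illustration of the renamed getter alongside the new setter (the surrounding cipher implementation and state struct are hypothetical):

    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/crypto.h>

    struct my_cipher_state {                /* hypothetical private state */
        unsigned char iv_copy[16];
    };

    static int my_cipher_init(EVP_CIPHER_CTX *ctx, const unsigned char *key,
                              const unsigned char *iv, int enc)
    {
        struct my_cipher_state *st = OPENSSL_zalloc(sizeof(*st));

        if (st == NULL)
            return 0;
        if (iv != NULL)
            memcpy(st->iv_copy, iv, sizeof(st->iv_copy));
        /* Attach our state; the previous cipher data pointer is returned. */
        EVP_CIPHER_CTX_set_cipher_data(ctx, st);
        return 1;
    }

    static int my_cipher_do(EVP_CIPHER_CTX *ctx, unsigned char *out,
                            const unsigned char *in, size_t inl)
    {
        struct my_cipher_state *st = EVP_CIPHER_CTX_get_cipher_data(ctx);

        /* ... perform the operation using st ... */
        (void)st; (void)out; (void)in; (void)inl;
        return 1;
    }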
-
Matt Caswell authored
Ensure that a value of 0 is correctly handled for the split_send_frag argument. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
The new pipeline code added a new function EVP_CIPHER_CTX_set_cipher_data(). Add documentation for this and the existing EVP_CIPHER_CTX_cipher_data() function. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
We used to use the wrec field in the record layer for keeping track of the current record that we are writing out. As part of the pipelining changes this has been moved to stack allocated variables to do the same thing, therefore the field is no longer needed. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Update a comment that was out of date due to the pipelining changes. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Document the new split_send_frag, max_pipelines and read_buf options. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
A previous commit added the SSL_has_pending() function, which provides a way of knowing whether OpenSSL has buffered, but as yet unprocessed, record data. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Add some documentation for all of the SSL/SSL_CTX functions/ctrls for controlling read and write pipelining. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Fix an erroneous fall-through when setting the max_pipelines value. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Previously s_client and s_server relied on using SSL_pending() which does not take into account read_ahead. For read pipelining to work, read_ahead gets set automatically. Therefore s_client and s_server have been converted to use SSL_has_pending() instead. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
This is similar to SSL_pending() but just returns a 1 if there is data pending in the internal OpenSSL buffers or 0 otherwise (as opposed to SSL_pending() which returns the number of bytes available). Unlike SSL_pending() this will work even if "read_ahead" is set (which is the case if you are using read pipelining, or if you are doing DTLS). A 1 return value means that we have unprocessed data. It does *not* necessarily indicate that there will be application data returned from a call to SSL_read(). The unprocessed data may not be application data or there could be errors when we attempt to parse the records. Reviewed-by: Tim Hudson <tjh@openssl.org>
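A small sketch of the intended use (the drain helper is illustrative): keep reading while OpenSSL still holds unprocessed record data, remembering that SSL_read() may legitimately return no application data even when SSL_has_pending() returned 1:

    #include <openssl/ssl.h>

    static int drain_pending(SSL *ssl, unsigned char *buf, size_t buflen)
    {
        do {
            int n = SSL_read(ssl, buf, (int)buflen);

            if (n <= 0)
                return SSL_get_error(ssl, n);   /* may just be WANT_READ */
            /* ... handle n bytes of application data ... */
        } while (SSL_has_pending(ssl));

        return SSL_ERROR_NONE;
    }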
-
Matt Caswell authored
This capability is required for read pipelining. We will only read in as many records as will fit in the read buffer (and the network can provide in one go). The bigger the buffer the more records we can process in parallel. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
With read pipelining we use multiple SSL3_RECORD structures for reading. There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these would be used). Each one has a 16k compression buffer allocated! This results in a significant amount of memory being consumed which, most of the time, is not needed. This change swaps the allocation of the compression buffer to be lazy so that it is only done immediately before it is actually used. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Read pipelining is controlled in a slightly different way than write pipelining. While reading we are constrained by the number of records that the peer (and the network) can provide to us in one go. The more records we can get in one go, the more opportunity we have to parallelise the processing. There are two parameters that affect this:
* The number of pipelines that we are willing to process in one go. This is controlled by max_pipelines (as for write pipelining).
* The size of our read buffer. A subsequent commit will provide an API for adjusting the size of the buffer.
Another requirement for this to work is that "read_ahead" must be set. The read_ahead parameter will attempt to read as much data into our read buffer as the network can provide. Without this set, data is read into the read buffer on demand. Setting the max_pipelines parameter to a value greater than 1 will automatically also turn read_ahead on.
Finally, read pipelining as currently implemented will only parallelise the processing of application data records; parallelising other record types would only make a difference during renegotiation, so this is unlikely to have a significant impact. Reviewed-by: Tim Hudson <tjh@openssl.org>
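A minimal sketch of turning this on from application code, using the SSL_CTX-level setters these parameters are exposed through (the pipeline count and buffer size below are illustrative, and a pipeline-capable cipher such as the dasync one is assumed):

    #include <openssl/ssl.h>

    static void configure_read_pipelining(SSL_CTX *ctx)
    {
        /* Process up to 4 records in parallel; a value greater than 1
         * also turns read_ahead on automatically. */
        SSL_CTX_set_max_pipelines(ctx, 4);

        /* A bigger read buffer lets more records be pulled in from the
         * network in one go, giving more scope for parallel processing. */
        SSL_CTX_set_default_read_buffer_len(ctx, 4 * 16 * 1024);
    }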
-
Matt Caswell authored
Add dummy pipeline support to dasync for the aes128_cbc_hmac_sha1 cipher. This is treated as an AEAD cipher. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Add the options min_send_frag and max_pipelines to s_server and s_client in order to control pipelining capabilities. This will only have an effect if a pipeline capable cipher is used (such as the one provided by the dasync engine). Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Use the new pipeline cipher capability to encrypt multiple records being written out all in one go. Two new SSL/SSL_CTX parameters can be used to control how this works: max_pipelines and split_send_fragment.
max_pipelines defines the maximum number of pipelines that can ever be used in one go for a single connection. It must always be less than or equal to SSL_MAX_PIPELINES (currently defined to be 32). By default only one pipeline will be used (i.e. normal non-parallel operation).
split_send_fragment defines how data is split up into pipelines. The number of pipelines used will be determined by the amount of data provided to the SSL_write call divided by split_send_fragment. For example, if split_send_fragment is set to 2000 and max_pipelines is 4 then:
SSL_write called with 0-2000 bytes == 1 pipeline used
SSL_write called with 2001-4000 bytes == 2 pipelines used
SSL_write called with 4001-6000 bytes == 3 pipelines used
SSL_write called with 6001+ bytes == 4 pipelines used
split_send_fragment must always be less than or equal to max_send_fragment. By default it is set to be equal to max_send_fragment. This means that the same number of records will always be created as would have been created in the non-parallel case, although the data will be apportioned differently. In the parallel case data will be spread equally between the pipelines. Reviewed-by: Tim Hudson <tjh@openssl.org>
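The worked example above maps onto application code roughly as follows (a sketch; the values mirror the example and a pipeline-capable cipher is assumed):

    #include <openssl/ssl.h>

    static void configure_write_pipelining(SSL_CTX *ctx)
    {
        /* Must be <= SSL_MAX_PIPELINES (32). */
        SSL_CTX_set_max_pipelines(ctx, 4);

        /* Must be <= max_send_fragment; with 2000 here, an 8000 byte
         * SSL_write() is split across 4 pipelines. */
        SSL_CTX_set_split_send_fragment(ctx, 2000);
    }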
-
Matt Caswell authored
Implement aes128-cbc as a pipeline capable cipher in the dasync engine. As dasync is just a dummy engine, it actually just performs the parallel encrypts/decrypts in serial. Reviewed-by: Tim Hudson <tjh@openssl.org>
-
Matt Caswell authored
Add a flag to indicate that a cipher is capable of performing "pipelining", i.e. multiple encrypts/decrypts in parallel. Also add some new ctrls that ciphers will need to implement if they are pipeline capable. Reviewed-by: Tim Hudson <tjh@openssl.org>
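A hedged sketch of how a caller might drive such a cipher (the helper and its arguments are hypothetical; only the flag check and the pipeline ctrls are the point here):

    #include <openssl/evp.h>

    static int set_up_pipelines(EVP_CIPHER_CTX *ctx,
                                unsigned char **outbufs, unsigned char **inbufs,
                                size_t *inlens, size_t numpipes)
    {
        /* Only ciphers advertising EVP_CIPH_FLAG_PIPELINE can do this. */
        if ((EVP_CIPHER_flags(EVP_CIPHER_CTX_cipher(ctx))
                 & EVP_CIPH_FLAG_PIPELINE) == 0)
            return 0;

        if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_SET_PIPELINE_OUTPUT_BUFS,
                                (int)numpipes, outbufs) <= 0
                || EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_SET_PIPELINE_INPUT_BUFS,
                                       (int)numpipes, inbufs) <= 0
                || EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_SET_PIPELINE_INPUT_LENS,
                                       (int)numpipes, inlens) <= 0)
            return 0;

        /* A single cipher operation on ctx now covers all the registered
         * pipelines; the exact call the record layer makes is omitted. */
        return 1;
    }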
-
Matt Caswell authored
Reviewed-by: Rich Salz <rsalz@openssl.org>
-
Emilia Kasper authored
- Only build & test two configurations. Make all the other build variants buildonly on gcc (clang on osx).
- Don't build with default clang at all on linux.
- Only use gcc-5 and clang-3.6 for the sanitizer builds. Re-running e.g. CONFIG_OPTS="shared" with them seems redundant.
Reviewed-by: Richard Levitte <levitte@openssl.org>
-