Compare commits

...

16 Commits
master ... f23

Author SHA1 Message Date
Kamil Dudka
13ec13d953 Resolves: CVE-2016-7167 - reject negative string lengths in curl_easy_[un]escape() 2016-09-14 12:27:31 +02:00
Kamil Dudka
36b153054a work around race condition in PK11_FindSlotByName()
Bug: https://bugzilla.mozilla.org/1297397
2016-08-26 15:55:55 +02:00
Kamil Dudka
bb64ce4e2e Related: CVE-2016-5420 - fix incorrect use of a previously loaded certificate from file 2016-08-26 15:54:16 +02:00
Kamil Dudka
ca9e2d56b2 Resolves: CVE-2016-5420 - fix re-using connections with wrong client cert 2016-08-03 17:11:45 +02:00
Kamil Dudka
1c9b12b033 Resolves: CVE-2016-5419 - fix TLS session resumption client cert bypass 2016-08-03 17:11:35 +02:00
Kamil Dudka
a91699a8d3 Resolves: CVE-2016-5421 - fix use of connection struct after free 2016-08-03 17:11:24 +02:00
Kamil Dudka
8e287ada5e Resolves: #1340757 - fix SIGSEGV of the curl tool
... while parsing URL with too many globs
2016-06-03 13:37:37 +02:00
Kamil Dudka
88c54d8197 tests/sshserver.pl: use RSA instead of DSA for host auth
DSA is no longer supported by OpenSSH 7.0, which causes all SCP/SFTP
test cases to be skipped.  Using RSA for host authentication works with
both old and new versions of OpenSSH.

Reported-by: Karlson2k

Closes #676

Upstream-commit: effa575fc7f028ee71fda16209d3d81af336b730
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
2016-02-25 13:04:57 +01:00
Kamil Dudka
0c9fbb7ebe Resolve: #1311907 - cookie: fix bug in export if any-domain cookie is present 2016-02-25 11:21:22 +01:00
Kamil Dudka
c70c78b593 Resolves: CVE-2016-0755 - match credentials when re-using a proxy connection 2016-01-27 12:29:44 +01:00
Kamil Dudka
e955dd2f2b Resolves: #1104597 - prevent NSS from incorrectly re-using a session 2015-09-18 18:29:08 +02:00
Kamil Dudka
45d6457526 prevent test46 from failing due to expired cookie 2015-08-27 16:10:28 +02:00
Kamil Dudka
d6de9efc29 better explain the conditional BR on valgrind 2015-08-27 16:07:15 +02:00
Kamil Dudka
0b066134ee Resolves: #1248389 - prevent dnf from crashing when using both FTP and HTTP 2015-07-30 15:43:53 +02:00
Kamil Dudka
b7c5c6ea4b test1801: completely disable the test-case
Bug: https://github.com/bagder/curl/commit/21e82bd6#commitcomment-12226582
2015-07-30 15:43:13 +02:00
Kamil Dudka
5dc5cd8084 build support for the HTTP/2 protocol 2015-07-30 15:43:10 +02:00
13 changed files with 997 additions and 4 deletions

0001-curl-7.43.0-f7dcc7c1.patch

@ -0,0 +1,111 @@
From 2f8154c11e2cc139067973e47f1ffe5a302fb89d Mon Sep 17 00:00:00 2001
From: Kamil Dudka <kdudka@redhat.com>
Date: Thu, 30 Jul 2015 12:01:20 +0200
Subject: [PATCH] http: move HTTP/2 cleanup code off http_disconnect()
Otherwise it would never be called for an HTTP/2 connection, which has
its own disconnect handler.
I spotted this while debugging <https://bugzilla.redhat.com/1248389>
where the http_disconnect() handler was called on an FTP session handle
causing 'dnf' to crash. conn->data->req.protop of type (struct FTP *)
was reinterpreted as type (struct HTTP *) which resulted in SIGSEGV in
Curl_add_buffer_free() after printing the "Connection cache is full,
closing the oldest one." message.
A previously working version of libcurl started to crash after it was
recompiled with HTTP/2 support, even though the HTTP/2 protocol was not
actually used. This commit makes it work again, although I suspect the
root cause (reinterpreting session handle data of an incompatible protocol)
still has to be fixed. Otherwise the same will happen when mixing FTP
and HTTP/2 connections and exceeding the connection cache limit.
Reported-by: Tomas Tomecek
Bug: https://bugzilla.redhat.com/1248389
Upstream-commit: f7dcc7c11817f6eaee61b1cd84ffc1b2b1fcac43
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/http.c | 25 ++-----------------------
lib/http2.c | 11 +++++++++++
2 files changed, 13 insertions(+), 23 deletions(-)
diff --git a/lib/http.c b/lib/http.c
index a1eef81..8d5b9a4 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -86,7 +86,6 @@
* Forward declarations.
*/
-static CURLcode http_disconnect(struct connectdata *conn, bool dead);
static int http_getsock_do(struct connectdata *conn,
curl_socket_t *socks,
int numsocks);
@@ -117,7 +116,7 @@ const struct Curl_handler Curl_handler_http = {
http_getsock_do, /* doing_getsock */
ZERO_NULL, /* domore_getsock */
ZERO_NULL, /* perform_getsock */
- http_disconnect, /* disconnect */
+ ZERO_NULL, /* disconnect */
ZERO_NULL, /* readwrite */
PORT_HTTP, /* defport */
CURLPROTO_HTTP, /* protocol */
@@ -141,7 +140,7 @@ const struct Curl_handler Curl_handler_https = {
http_getsock_do, /* doing_getsock */
ZERO_NULL, /* domore_getsock */
ZERO_NULL, /* perform_getsock */
- http_disconnect, /* disconnect */
+ ZERO_NULL, /* disconnect */
ZERO_NULL, /* readwrite */
PORT_HTTPS, /* defport */
CURLPROTO_HTTPS, /* protocol */
@@ -168,21 +167,6 @@ CURLcode Curl_http_setup_conn(struct connectdata *conn)
return CURLE_OK;
}
-static CURLcode http_disconnect(struct connectdata *conn, bool dead_connection)
-{
-#ifdef USE_NGHTTP2
- struct HTTP *http = conn->data->req.protop;
- if(http) {
- Curl_add_buffer_free(http->header_recvbuf);
- http->header_recvbuf = NULL; /* clear the pointer */
- }
-#else
- (void)conn;
-#endif
- (void)dead_connection;
- return CURLE_OK;
-}
-
/*
* checkheaders() checks the linked list of custom HTTP headers for a
* particular header (prefix).
diff --git a/lib/http2.c b/lib/http2.c
index 1a2c486..eec0c9f 100644
--- a/lib/http2.c
+++ b/lib/http2.c
@@ -79,6 +79,7 @@ static int http2_getsock(struct connectdata *conn,
static CURLcode http2_disconnect(struct connectdata *conn,
bool dead_connection)
{
+ struct HTTP *http = conn->data->req.protop;
struct http_conn *c = &conn->proto.httpc;
(void)dead_connection;
@@ -88,6 +89,11 @@ static CURLcode http2_disconnect(struct connectdata *conn,
Curl_safefree(c->inbuf);
Curl_hash_destroy(&c->streamsh);
+ if(http) {
+ Curl_add_buffer_free(http->header_recvbuf);
+ http->header_recvbuf = NULL; /* clear the pointer */
+ }
+
DEBUGF(infof(conn->data, "HTTP/2 DISCONNECT done\n"));
return CURLE_OK;
--
2.4.6
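
The crash described in the patch above was hit by a workload that mixes FTP and HTTP transfers until the connection cache overflows, on a libcurl built with HTTP/2 support (as the spec change below enables). The following is a hypothetical reproducer sketch, not part of the patch set; the URLs and the one-slot cache are placeholders chosen only to force an eviction of the FTP connection.

#include <curl/curl.h>

/* Sketch: run an FTP and an HTTP transfer on one multi handle whose
 * connection cache holds a single connection, so libcurl has to close
 * the oldest one and run the disconnect handler discussed above. */
int main(void)
{
  CURLM *multi;
  CURL *ftp, *http;
  int running;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  multi = curl_multi_init();
  curl_multi_setopt(multi, CURLMOPT_MAXCONNECTS, 1L);   /* tiny cache */

  ftp = curl_easy_init();
  curl_easy_setopt(ftp, CURLOPT_URL, "ftp://ftp.example.org/file.txt");
  http = curl_easy_init();
  curl_easy_setopt(http, CURLOPT_URL, "http://www.example.org/");

  curl_multi_add_handle(multi, ftp);
  curl_multi_add_handle(multi, http);

  do {
    int numfds;
    curl_multi_perform(multi, &running);
    curl_multi_wait(multi, NULL, 0, 1000, &numfds);
  } while(running);

  curl_multi_remove_handle(multi, ftp);
  curl_multi_remove_handle(multi, http);
  curl_easy_cleanup(ftp);
  curl_easy_cleanup(http);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}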

0002-curl-7.43.0-002d58f1.patch

@ -0,0 +1,42 @@
From c90b930b8312bb31f62325a09125cf44dd58d506 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Mon, 10 Aug 2015 00:12:12 +0200
Subject: [PATCH] test46: update cookie expire time
... since it went old and thus was expired and caused the test to fail!
Upstream-commit: 002d58f1e8d8e725ba6d676599838983561feff9
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
tests/data/test46 | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tests/data/test46 b/tests/data/test46
index b6f8f83..b6ebe80 100644
--- a/tests/data/test46
+++ b/tests/data/test46
@@ -51,8 +51,8 @@ TZ=GMT
www.fake.come FALSE / FALSE 1022144953 cookiecliente si
www.loser.com FALSE / FALSE 1139150993 UID 99
-%HOSTIP FALSE / FALSE 1439150993 mooo indeed
-#HttpOnly_%HOSTIP FALSE /want FALSE 1439150993 mooo2 indeed2
+%HOSTIP FALSE / FALSE 1739150993 mooo indeed
+#HttpOnly_%HOSTIP FALSE /want FALSE 1739150993 mooo2 indeed2
%HOSTIP FALSE /want FALSE 0 empty
</file>
</client>
@@ -76,8 +76,8 @@ Cookie: empty=; mooo2=indeed2; mooo=indeed
www.fake.come FALSE / FALSE 1022144953 cookiecliente si
www.loser.com FALSE / FALSE 1139150993 UID 99
-%HOSTIP FALSE / FALSE 1439150993 mooo indeed
-#HttpOnly_%HOSTIP FALSE /want FALSE 1439150993 mooo2 indeed2
+%HOSTIP FALSE / FALSE 1739150993 mooo indeed
+#HttpOnly_%HOSTIP FALSE /want FALSE 1739150993 mooo2 indeed2
%HOSTIP FALSE /want FALSE 0 empty
%HOSTIP FALSE / FALSE 2054030187 ckyPersistent permanent
%HOSTIP FALSE / FALSE 0 ckySession temporary
--
2.4.6

0003-curl-7.43.0-958d2ffb.patch

@ -0,0 +1,71 @@
From 98dee5ab5a862a506beb8a7bf60c0aaec3b08a0f Mon Sep 17 00:00:00 2001
From: Kamil Dudka <kdudka@redhat.com>
Date: Fri, 18 Sep 2015 17:07:22 +0200
Subject: [PATCH 1/2] nss: check return values of NSS functions
Upstream-commit: a9fd53887ba07cd8313a8b9706f2dc71d6b8ed1b
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/vtls/nss.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/lib/vtls/nss.c b/lib/vtls/nss.c
index 91727c7..1fa1c64 100644
--- a/lib/vtls/nss.c
+++ b/lib/vtls/nss.c
@@ -1792,9 +1792,13 @@ static CURLcode nss_setup_connect(struct connectdata *conn, int sockindex)
/* Force handshake on next I/O */
- SSL_ResetHandshake(connssl->handle, /* asServer */ PR_FALSE);
+ if(SSL_ResetHandshake(connssl->handle, /* asServer */ PR_FALSE)
+ != SECSuccess)
+ goto error;
- SSL_SetURL(connssl->handle, conn->host.name);
+ /* propagate hostname to the TLS layer */
+ if(SSL_SetURL(connssl->handle, conn->host.name) != SECSuccess)
+ goto error;
return CURLE_OK;
--
2.5.2
From d082ad368ecec7894d8e9e9a35336b2350c30ade Mon Sep 17 00:00:00 2001
From: Kamil Dudka <kdudka@redhat.com>
Date: Fri, 18 Sep 2015 17:10:05 +0200
Subject: [PATCH 2/2] nss: prevent NSS from incorrectly re-using a session
Without this workaround, NSS re-uses a session cache entry even though the
server name does not match. This causes the SNI host name to differ from
the actual host name. Consequently, certain servers (e.g. github.com)
respond with 400 to such requests.
Bug: https://bugzilla.mozilla.org/1202264
Upstream-commit: 958d2ffb198166a062a0ff20d009c64972a2b374
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/vtls/nss.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lib/vtls/nss.c b/lib/vtls/nss.c
index 1fa1c64..3d73ffe 100644
--- a/lib/vtls/nss.c
+++ b/lib/vtls/nss.c
@@ -1800,6 +1800,10 @@ static CURLcode nss_setup_connect(struct connectdata *conn, int sockindex)
if(SSL_SetURL(connssl->handle, conn->host.name) != SECSuccess)
goto error;
+ /* prevent NSS from re-using the session for a different hostname */
+ if(SSL_SetSockPeerID(connssl->handle, conn->host.name) != SECSuccess)
+ goto error;
+
return CURLE_OK;
error:
--
2.5.2

0004-curl-7.43.0-CVE-2016-0755.patch

@ -0,0 +1,137 @@
From 43f8d61ef18639c8d8573c0c1d2bdfa56407bae6 Mon Sep 17 00:00:00 2001
From: Isaac Boukris <iboukris@gmail.com>
Date: Wed, 13 Jan 2016 11:05:51 +0200
Subject: [PATCH] NTLM: Fix ConnectionExists to compare Proxy credentials
Proxy NTLM authentication should compare credentials when
re-using a connection, similar to host authentication, as it
authenticates the connection.
Example:
curl -v -x http://proxy:port http://host/ -U good_user:good_pwd
--proxy-ntlm --next -x http://proxy:port http://host/
[-U fake_user:fake_pwd --proxy-ntlm]
CVE-2016-0755
Bug: http://curl.haxx.se/docs/adv_20160127A.html
Upstream-commit: d41dcba4e9b69d6b761e3460cc6ae7e8fd8f621f
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/url.c | 62 ++++++++++++++++++++++++++++++++++++++++----------------------
1 file changed, 40 insertions(+), 22 deletions(-)
diff --git a/lib/url.c b/lib/url.c
index 17279bb..f32c8cf 100644
--- a/lib/url.c
+++ b/lib/url.c
@@ -3107,12 +3107,17 @@ ConnectionExists(struct SessionHandle *data,
struct connectdata *check;
struct connectdata *chosen = 0;
bool canPipeline = IsPipeliningPossible(data, needle);
+ struct connectbundle *bundle;
+
#ifdef USE_NTLM
- bool wantNTLMhttp = ((data->state.authhost.want & CURLAUTH_NTLM) ||
- (data->state.authhost.want & CURLAUTH_NTLM_WB)) &&
- (needle->handler->protocol & PROTO_FAMILY_HTTP) ? TRUE : FALSE;
+ bool wantNTLMhttp = ((data->state.authhost.want &
+ (CURLAUTH_NTLM | CURLAUTH_NTLM_WB)) &&
+ (needle->handler->protocol & PROTO_FAMILY_HTTP));
+ bool wantProxyNTLMhttp = (needle->bits.proxy_user_passwd &&
+ ((data->state.authproxy.want &
+ (CURLAUTH_NTLM | CURLAUTH_NTLM_WB)) &&
+ (needle->handler->protocol & PROTO_FAMILY_HTTP)));
#endif
- struct connectbundle *bundle;
*force_reuse = FALSE;
*waitpipe = FALSE;
@@ -3152,9 +3157,6 @@ ConnectionExists(struct SessionHandle *data,
curr = bundle->conn_list->head;
while(curr) {
bool match = FALSE;
-#if defined(USE_NTLM)
- bool credentialsMatch = FALSE;
-#endif
size_t pipeLen;
/*
@@ -3262,21 +3264,14 @@ ConnectionExists(struct SessionHandle *data,
continue;
}
- if((!(needle->handler->flags & PROTOPT_CREDSPERREQUEST))
-#ifdef USE_NTLM
- || (wantNTLMhttp || check->ntlm.state != NTLMSTATE_NONE)
-#endif
- ) {
- /* This protocol requires credentials per connection or is HTTP+NTLM,
+ if(!(needle->handler->flags & PROTOPT_CREDSPERREQUEST)) {
+ /* This protocol requires credentials per connection,
so verify that we're using the same name and password as well */
if(!strequal(needle->user, check->user) ||
!strequal(needle->passwd, check->passwd)) {
/* one of them was different */
continue;
}
-#if defined(USE_NTLM)
- credentialsMatch = TRUE;
-#endif
}
if(!needle->bits.httpproxy || needle->handler->flags&PROTOPT_SSL ||
@@ -3335,20 +3330,43 @@ ConnectionExists(struct SessionHandle *data,
possible. (Especially we must not reuse the same connection if
partway through a handshake!) */
if(wantNTLMhttp) {
- if(credentialsMatch && check->ntlm.state != NTLMSTATE_NONE) {
- chosen = check;
+ if(!strequal(needle->user, check->user) ||
+ !strequal(needle->passwd, check->passwd))
+ continue;
+ }
+ else if(check->ntlm.state != NTLMSTATE_NONE) {
+ /* Connection is using NTLM auth but we don't want NTLM */
+ continue;
+ }
+
+ /* Same for Proxy NTLM authentication */
+ if(wantProxyNTLMhttp) {
+ if(!strequal(needle->proxyuser, check->proxyuser) ||
+ !strequal(needle->proxypasswd, check->proxypasswd))
+ continue;
+ }
+ else if(check->proxyntlm.state != NTLMSTATE_NONE) {
+ /* Proxy connection is using NTLM auth but we don't want NTLM */
+ continue;
+ }
+
+ if(wantNTLMhttp || wantProxyNTLMhttp) {
+ /* Credentials are already checked, we can use this connection */
+ chosen = check;
+ if((wantNTLMhttp &&
+ (check->ntlm.state != NTLMSTATE_NONE)) ||
+ (wantProxyNTLMhttp &&
+ (check->proxyntlm.state != NTLMSTATE_NONE))) {
/* We must use this connection, no other */
*force_reuse = TRUE;
break;
}
- else if(credentialsMatch)
- /* this is a backup choice */
- chosen = check;
+
+ /* Continue look up for a better connection */
continue;
}
#endif
-
if(canPipeline) {
/* We can pipeline if we want to. Let's continue looking for
the optimal connection to use, i.e the shortest pipe that is not
--
2.5.0
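
The command line in the commit message maps directly onto libcurl options. Below is a hedged sketch of the pre-fix risk, not taken from the patch set; the proxy address, URL and credentials are placeholders. Before this change, the second transfer could ride on the proxy connection that was NTLM-authenticated with the first set of credentials.

#include <curl/curl.h>

/* Hypothetical scenario: two transfers through the same NTLM proxy with
 * different credentials, re-using one easy handle (and its connections). */
static void one_transfer(CURL *curl, const char *proxy_userpwd)
{
  curl_easy_setopt(curl, CURLOPT_URL, "http://host.example/");
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:3128");
  curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_NTLM);
  curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, proxy_userpwd);
  curl_easy_perform(curl);
}

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();

  one_transfer(curl, "good_user:good_pwd");  /* authenticates the proxy connection */
  one_transfer(curl, "fake_user:fake_pwd");  /* must not re-use that connection */

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}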

0005-curl-7.43.0-ef0fdb83.patch

@ -0,0 +1,63 @@
From 635c0837cfb774053238a691378716286842d886 Mon Sep 17 00:00:00 2001
From: Jay Satiro <raysatiro@yahoo.com>
Date: Thu, 18 Jun 2015 19:35:04 -0400
Subject: [PATCH] cookie: Fix bug in export if any-domain cookie is present
In 3013bb6 I had changed cookie export to ignore any-domain cookies,
however the logic I used to do so was incorrect, and would lead to a
busy loop in the case of exporting a cookie list that contained
any-domain cookies. The result of that is worse though, because in that
case the other cookies would not be written resulting in an empty file
once the application is terminated to stop the busy loop.
Upstream-commit: ef0fdb83b89c87b63e94bf6ecdab5cd8c6458b2e
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/cookie.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/lib/cookie.c b/lib/cookie.c
index 94f2a8b..22730cf 100644
--- a/lib/cookie.c
+++ b/lib/cookie.c
@@ -1274,9 +1274,8 @@ static int cookie_output(struct CookieInfo *c, const char *dumphere)
"# http://curl.haxx.se/docs/http-cookies.html\n"
"# This file was generated by libcurl! Edit at your own risk.\n\n",
out);
- co = c->cookies;
- while(co) {
+ for(co = c->cookies; co; co = co->next) {
if(!co->domain)
continue;
format_ptr = get_netscape_format(co);
@@ -1288,7 +1287,6 @@ static int cookie_output(struct CookieInfo *c, const char *dumphere)
}
fprintf(out, "%s\n", format_ptr);
free(format_ptr);
- co=co->next;
}
}
@@ -1309,9 +1307,7 @@ struct curl_slist *Curl_cookie_list(struct SessionHandle *data)
(data->cookies->numcookies == 0))
return NULL;
- c = data->cookies->cookies;
-
- while(c) {
+ for(c = data->cookies->cookies; c; c = c->next) {
if(!c->domain)
continue;
line = get_netscape_format(c);
@@ -1326,7 +1322,6 @@ struct curl_slist *Curl_cookie_list(struct SessionHandle *data)
return NULL;
}
list = beg;
- c = c->next;
}
return list;
--
2.5.0
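
The busy loop described in the commit message is the classic while-plus-continue pitfall; here is a generic illustration (not curl code) of the broken and the fixed loop shapes the patch above switches between.

struct node { int skip; struct node *next; };

/* broken: 'continue' jumps back to the while() test without advancing p,
 * so the first skipped node makes the loop spin forever */
static void walk_broken(struct node *head)
{
  struct node *p = head;
  while(p) {
    if(p->skip)
      continue;            /* BUG: p never advances */
    /* ... process p ... */
    p = p->next;
  }
}

/* fixed: the for-header advances p on every iteration, so 'continue'
 * simply moves on to the next node -- the shape used in the patch above */
static void walk_fixed(struct node *head)
{
  struct node *p;
  for(p = head; p; p = p->next) {
    if(p->skip)
      continue;
    /* ... process p ... */
  }
}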

0006-curl-7.43.0-effa575f.patch

@ -0,0 +1,73 @@
From d4211b7d47747af9d36796517167cce14ad5e47b Mon Sep 17 00:00:00 2001
From: Kamil Dudka <kdudka@redhat.com>
Date: Tue, 23 Feb 2016 10:31:52 +0100
Subject: [PATCH] tests/sshserver.pl: use RSA instead of DSA for host auth
DSA is no longer supported by OpenSSH 7.0, which causes all SCP/SFTP
test cases to be skipped. Using RSA for host authentication works with
both old and new versions of OpenSSH.
Reported-by: Karlson2k
Closes #676
Upstream-commit: effa575fc7f028ee71fda16209d3d81af336b730
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
tests/sshhelp.pm | 4 ++--
tests/sshserver.pl | 12 ++++++------
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/tests/sshhelp.pm b/tests/sshhelp.pm
index 914879b..6719f9f 100644
--- a/tests/sshhelp.pm
+++ b/tests/sshhelp.pm
@@ -120,8 +120,8 @@ $sshlog = undef; # ssh client log file
$sftplog = undef; # sftp client log file
$sftpcmds = 'curl_sftp_cmds'; # sftp client commands batch file
$knownhosts = 'curl_client_knownhosts'; # ssh knownhosts file
-$hstprvkeyf = 'curl_host_dsa_key'; # host private key file
-$hstpubkeyf = 'curl_host_dsa_key.pub'; # host public key file
+$hstprvkeyf = 'curl_host_rsa_key'; # host private key file
+$hstpubkeyf = 'curl_host_rsa_key.pub'; # host public key file
$cliprvkeyf = 'curl_client_key'; # client private key file
$clipubkeyf = 'curl_client_key.pub'; # client public key file
diff --git a/tests/sshserver.pl b/tests/sshserver.pl
index d8c2d6f..a99731a 100755
--- a/tests/sshserver.pl
+++ b/tests/sshserver.pl
@@ -371,12 +371,12 @@ if((! -e $hstprvkeyf) || (! -s $hstprvkeyf) ||
# Make sure all files are gone so ssh-keygen doesn't complain
unlink($hstprvkeyf, $hstpubkeyf, $cliprvkeyf, $clipubkeyf);
logmsg 'generating host keys...' if($verbose);
- if(system "\"$sshkeygen\" -q -t dsa -f $hstprvkeyf -C 'curl test server' -N ''") {
+ if(system "\"$sshkeygen\" -q -t rsa -f $hstprvkeyf -C 'curl test server' -N ''") {
logmsg 'Could not generate host key';
exit 1;
}
logmsg 'generating client keys...' if($verbose);
- if(system "\"$sshkeygen\" -q -t dsa -f $cliprvkeyf -C 'curl test client' -N ''") {
+ if(system "\"$sshkeygen\" -q -t rsa -f $cliprvkeyf -C 'curl test client' -N ''") {
logmsg 'Could not generate client key';
exit 1;
}
@@ -729,11 +729,11 @@ if(system "\"$sshd\" -t -f $sshdconfig > $sshdlog 2>&1") {
if((! -e $knownhosts) || (! -s $knownhosts)) {
logmsg 'generating ssh client known hosts file...' if($verbose);
unlink($knownhosts);
- if(open(DSAKEYFILE, "<$hstpubkeyf")) {
- my @dsahostkey = do { local $/ = ' '; <DSAKEYFILE> };
- if(close(DSAKEYFILE)) {
+ if(open(RSAKEYFILE, "<$hstpubkeyf")) {
+ my @rsahostkey = do { local $/ = ' '; <RSAKEYFILE> };
+ if(close(RSAKEYFILE)) {
if(open(KNOWNHOSTS, ">$knownhosts")) {
- print KNOWNHOSTS "$listenaddr ssh-dss $dsahostkey[1]\n";
+ print KNOWNHOSTS "$listenaddr ssh-rsa $rsahostkey[1]\n";
if(!close(KNOWNHOSTS)) {
$error = "Error: cannot close file $knownhosts";
}
--
2.5.0

0007-curl-7.49.1-urlglob.patch

@ -0,0 +1,35 @@
From 5a3eddc9c327dcc20620d8ae47b27f5085811c7e Mon Sep 17 00:00:00 2001
From: Kamil Dudka <kdudka@redhat.com>
Date: Fri, 3 Jun 2016 11:26:20 +0200
Subject: [PATCH] tool_urlglob: fix off-by-one error in glob_parse()
... causing SIGSEGV while parsing URL with too many globs.
Minimal example:
$ curl $(for i in $(seq 101); do printf '{a}'; done)
Reported-by: Romain Coltel
Bug: https://bugzilla.redhat.com/1340757
Upstream-commit: 584d0121c353ed855115c39f6cbc009854018029
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
src/tool_urlglob.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/tool_urlglob.c b/src/tool_urlglob.c
index 70d17fe..a357b8b 100644
--- a/src/tool_urlglob.c
+++ b/src/tool_urlglob.c
@@ -400,7 +400,7 @@ static CURLcode glob_parse(URLGlob *glob, char *pattern,
}
}
- if(++glob->size > GLOB_PATTERN_NUM)
+ if(++glob->size >= GLOB_PATTERN_NUM)
return GLOBERROR("too many globs", pos, CURLE_URL_MALFORMAT);
}
return res;
--
2.5.5
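
The one-character change above is a bounds check: once the counter has been incremented it can be used against an array of GLOB_PATTERN_NUM entries, so the rejection has to trigger at '>=' rather than '>'. A generic, self-contained illustration of that rule (not the actual curl data structures):

#include <stddef.h>

#define PATTERN_MAX 100          /* stands in for GLOB_PATTERN_NUM */

static int patterns[PATTERN_MAX];

/* returns -1 when the table is full; with '>' instead of '>=' the counter
 * could reach PATTERN_MAX and the write below would land one past the end */
static int add_pattern(size_t *size, int value)
{
  if(++*size >= PATTERN_MAX)
    return -1;                   /* "too many globs" */
  patterns[*size] = value;       /* *size stays within 1 .. PATTERN_MAX-1 */
  return 0;
}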

0008-curl-7.47.1-CVE-2016-5421.patch

@ -0,0 +1,34 @@
From 31c621ee6dcc793cf3b11e4c062f396d3bdfb503 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Sun, 31 Jul 2016 01:09:04 +0200
Subject: [PATCH] curl_multi_cleanup: clear connection pointer for easy handles
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
CVE-2016-5421
Bug: https://curl.haxx.se/docs/adv_20160803C.html
Reported-by: Marcelo Echeverria and Fernando Muñoz
Upstream-commit: 75dc096e01ef1e21b6c57690d99371dedb2c0b80
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/multi.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/multi.c b/lib/multi.c
index b63f8bf..3ff5e86 100644
--- a/lib/multi.c
+++ b/lib/multi.c
@@ -1841,6 +1841,8 @@ static void close_all_connections(struct Curl_multi *multi)
conn->data = multi->closure_handle;
sigpipe_ignore(conn->data, &pipe_st);
+ conn->data->easy_conn = NULL; /* clear the easy handle's connection
+ pointer */
/* This will remove the connection from the cache */
(void)Curl_disconnect(conn, FALSE);
sigpipe_restore(&pipe_st);
--
2.5.5
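
The two added lines clear a back-pointer before the connection it points to is torn down; a generic illustration (not curl code) of why that matters:

#include <stdlib.h>

struct conn  { int fd; };
struct owner { struct conn *conn; };   /* back-pointer, like the easy handle's */

static void close_conn(struct owner *o)
{
  struct conn *c = o->conn;
  o->conn = NULL;   /* clear the pointer first, as the patch does */
  free(c);          /* nothing reachable still refers to the freed object */
}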

0009-curl-7.47.1-CVE-2016-5419.patch

@ -0,0 +1,73 @@
From 419fc844f483eefd4843a4c1ca30e8187923454a Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Fri, 1 Jul 2016 13:32:31 +0200
Subject: [PATCH] TLS: switch off SSL session id when client cert is used
CVE-2016-5419
Bug: https://curl.haxx.se/docs/adv_20160803A.html
Reported-by: Bru Rom
Contributions-by: Eric Rescorla and Ray Satiro
Upstream-commit: 247d890da88f9ee817079e246c59f3d7d12fde5f
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/url.c | 1 +
lib/urldata.h | 1 +
lib/vtls/vtls.c | 10 ++++++++++
3 files changed, 12 insertions(+)
diff --git a/lib/url.c b/lib/url.c
index f32c8cf..be9cbea 100644
--- a/lib/url.c
+++ b/lib/url.c
@@ -5691,6 +5691,7 @@ static CURLcode create_conn(struct SessionHandle *data,
data->set.ssl.random_file = data->set.str[STRING_SSL_RANDOM_FILE];
data->set.ssl.egdsocket = data->set.str[STRING_SSL_EGDSOCKET];
data->set.ssl.cipher_list = data->set.str[STRING_SSL_CIPHER_LIST];
+ data->set.ssl.clientcert = data->set.str[STRING_CERT];
#ifdef USE_TLS_SRP
data->set.ssl.username = data->set.str[STRING_TLSAUTH_USERNAME];
data->set.ssl.password = data->set.str[STRING_TLSAUTH_PASSWORD];
diff --git a/lib/urldata.h b/lib/urldata.h
index 05bda79..3abece7 100644
--- a/lib/urldata.h
+++ b/lib/urldata.h
@@ -346,6 +346,7 @@ struct ssl_config_data {
char *CAfile; /* certificate to verify peer against */
const char *CRLfile; /* CRL to check certificate revocation */
const char *issuercert;/* optional issuer certificate filename */
+ char *clientcert;
char *random_file; /* path to file containing "random" data */
char *egdsocket; /* path to file containing the EGD daemon socket */
char *cipher_list; /* list of ciphers to use */
diff --git a/lib/vtls/vtls.c b/lib/vtls/vtls.c
index 42a2b58..879918b 100644
--- a/lib/vtls/vtls.c
+++ b/lib/vtls/vtls.c
@@ -156,6 +156,15 @@ Curl_clone_ssl_config(struct ssl_config_data *source,
else
dest->random_file = NULL;
+ if(source->clientcert) {
+ dest->clientcert = strdup(source->clientcert);
+ if(!dest->clientcert)
+ return FALSE;
+ dest->sessionid = FALSE;
+ }
+ else
+ dest->clientcert = NULL;
+
return TRUE;
}
@@ -166,6 +175,7 @@ void Curl_free_ssl_config(struct ssl_config_data* sslc)
Curl_safefree(sslc->cipher_list);
Curl_safefree(sslc->egdsocket);
Curl_safefree(sslc->random_file);
+ Curl_safefree(sslc->clientcert);
}
--
2.5.5
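
Inside libcurl the patch turns off session-id reuse whenever a client certificate is configured. An application stuck on an unpatched library can approximate the same behaviour itself; this is a hedged sketch (certificate paths are placeholders; CURLOPT_SSL_SESSIONID_CACHE is a long-standing libcurl option):

#include <curl/curl.h>

/* Hypothetical helper: present a client certificate but opt out of TLS
 * session-ID reuse on this handle, mirroring the effect of the fix above. */
static void setup_client_cert(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_SSLCERT, "client-cert.pem");   /* placeholder */
  curl_easy_setopt(curl, CURLOPT_SSLKEY, "client-key.pem");     /* placeholder */
  curl_easy_setopt(curl, CURLOPT_SSL_SESSIONID_CACHE, 0L);      /* no resumption */
}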

0010-curl-7.47.1-CVE-2016-5420.patch

@ -0,0 +1,75 @@
From 871472d6249864f8e91031045833349032caca74 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Sun, 31 Jul 2016 00:51:48 +0200
Subject: [PATCH 1/2] TLS: only reuse connections with the same client cert
CVE-2016-5420
Bug: https://curl.haxx.se/docs/adv_20160803B.html
Upstream-commit: 11ec5ad4352bba384404c56e77c7fab9382fd22d
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/vtls/vtls.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/vtls/vtls.c b/lib/vtls/vtls.c
index 879918b..08e2405 100644
--- a/lib/vtls/vtls.c
+++ b/lib/vtls/vtls.c
@@ -99,6 +99,7 @@ Curl_ssl_config_matches(struct ssl_config_data* data,
(data->verifyhost == needle->verifyhost) &&
safe_strequal(data->CApath, needle->CApath) &&
safe_strequal(data->CAfile, needle->CAfile) &&
+ safe_strequal(data->clientcert, needle->clientcert) &&
safe_strequal(data->random_file, needle->random_file) &&
safe_strequal(data->egdsocket, needle->egdsocket) &&
safe_strequal(data->cipher_list, needle->cipher_list))
--
2.5.5
From 2430e5ed89222f09e6042c9da89472a4e54b0af7 Mon Sep 17 00:00:00 2001
From: Kamil Dudka <kdudka@redhat.com>
Date: Mon, 22 Aug 2016 10:24:35 +0200
Subject: [PATCH 2/2] nss: refuse previously loaded certificate from file
... when we are not asked to use a certificate from file
Upstream-commit: 7700fcba64bf5806de28f6c1c7da3b4f0b38567d
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/vtls/nss.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/lib/vtls/nss.c b/lib/vtls/nss.c
index 722ea88..35fa50d 100644
--- a/lib/vtls/nss.c
+++ b/lib/vtls/nss.c
@@ -1005,10 +1005,10 @@ static SECStatus SelectClientCert(void *arg, PRFileDesc *sock,
struct ssl_connect_data *connssl = (struct ssl_connect_data *)arg;
struct SessionHandle *data = connssl->data;
const char *nickname = connssl->client_nickname;
+ static const char pem_slotname[] = "PEM Token #1";
if(connssl->obj_clicert) {
/* use the cert/key provided by PEM reader */
- static const char pem_slotname[] = "PEM Token #1";
SECItem cert_der = { 0, NULL, 0 };
void *proto_win = SSL_RevealPinArg(sock);
struct CERTCertificateStr *cert;
@@ -1070,6 +1070,12 @@ static SECStatus SelectClientCert(void *arg, PRFileDesc *sock,
if(NULL == nickname)
nickname = "[unknown]";
+ if(!strncmp(nickname, pem_slotname, sizeof(pem_slotname) - 1U)) {
+ failf(data, "NSS: refusing previously loaded certificate from file: %s",
+ nickname);
+ return SECFailure;
+ }
+
if(NULL == *pRetKey) {
failf(data, "NSS: private key not found for certificate: %s", nickname);
return SECFailure;
--
2.7.4

0011-curl-7.47.1-find-slot-race.patch

@ -0,0 +1,97 @@
From 5812a71c283936b85a77bd2745d4c6bb673cb55f Mon Sep 17 00:00:00 2001
From: Peter Wang <novalazy@gmail.com>
Date: Fri, 26 Aug 2016 16:28:39 +1000
Subject: [PATCH] nss: work around race condition in PK11_FindSlotByName()
Serialise the call to PK11_FindSlotByName() to avoid spurious errors in
a multi-threaded environment. The underlying cause is a race condition
in nssSlot_IsTokenPresent().
Bug: https://bugzilla.mozilla.org/1297397
Closes #985
Upstream-commit: 3a5d5de9ef52ebe8ca2bda2165edc1b34c242e54
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/vtls/nss.c | 22 +++++++++++++++++++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/lib/vtls/nss.c b/lib/vtls/nss.c
index e467360..1465c03 100644
--- a/lib/vtls/nss.c
+++ b/lib/vtls/nss.c
@@ -81,6 +81,7 @@ PRFileDesc *PR_ImportTCPSocket(PRInt32 osfd);
PRLock * nss_initlock = NULL;
PRLock * nss_crllock = NULL;
+PRLock *nss_findslot_lock = NULL;
struct curl_llist *nss_crl_list = NULL;
NSSInitContext * nss_context = NULL;
@@ -334,6 +335,19 @@ static char* dup_nickname(struct SessionHandle *data, enum dupstring cert_kind)
return NULL;
}
+/* Lock/unlock wrapper for PK11_FindSlotByName() to work around race condition
+ * in nssSlot_IsTokenPresent() causing spurious SEC_ERROR_NO_TOKEN. For more
+ * details, go to <https://bugzilla.mozilla.org/1297397>.
+ */
+static PK11SlotInfo* nss_find_slot_by_name(const char *slot_name)
+{
+ PK11SlotInfo *slot;
+ PR_Lock(nss_findslot_lock);
+ slot = PK11_FindSlotByName(slot_name);
+ PR_Unlock(nss_findslot_lock);
+ return slot;
+}
+
/* Call PK11_CreateGenericObject() with the given obj_class and filename. If
* the call succeeds, append the object handle to the list of objects so that
* the object can be destroyed in Curl_nss_close(). */
@@ -356,7 +370,7 @@ static CURLcode nss_create_object(struct ssl_connect_data *ssl,
if(!slot_name)
return CURLE_OUT_OF_MEMORY;
- slot = PK11_FindSlotByName(slot_name);
+ slot = nss_find_slot_by_name(slot_name);
free(slot_name);
if(!slot)
return result;
@@ -557,7 +571,7 @@ static CURLcode nss_load_key(struct connectdata *conn, int sockindex,
return result;
}
- slot = PK11_FindSlotByName("PEM Token #1");
+ slot = nss_find_slot_by_name("PEM Token #1");
if(!slot)
return CURLE_SSL_CERTPROBLEM;
@@ -1014,7 +1028,7 @@ static SECStatus SelectClientCert(void *arg, PRFileDesc *sock,
struct CERTCertificateStr *cert;
struct SECKEYPrivateKeyStr *key;
- PK11SlotInfo *slot = PK11_FindSlotByName(pem_slotname);
+ PK11SlotInfo *slot = nss_find_slot_by_name(pem_slotname);
if(NULL == slot) {
failf(data, "NSS: PK11 slot not found: %s", pem_slotname);
return SECFailure;
@@ -1250,6 +1264,7 @@ int Curl_nss_init(void)
PR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 256);
nss_initlock = PR_NewLock();
nss_crllock = PR_NewLock();
+ nss_findslot_lock = PR_NewLock();
}
/* We will actually initialize NSS later */
@@ -1304,6 +1319,7 @@ void Curl_nss_cleanup(void)
PR_DestroyLock(nss_initlock);
PR_DestroyLock(nss_crllock);
+ PR_DestroyLock(nss_findslot_lock);
nss_initlock = NULL;
initialized = 0;
--
2.7.4

0012-curl-7.47.1-CVE-2016-7167.patch

@ -0,0 +1,94 @@
From 7959c5713bbec03c9284a14b1fdd7379520199bc Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Thu, 8 Sep 2016 22:59:54 +0200
Subject: [PATCH 1/2] curl_easy_escape: deny negative string lengths as input
CVE-2016-7167
Bug: https://curl.haxx.se/docs/adv_20160914.html
Upstream-commit: 826a9ced2bed217155e34065ef4048931f327b1e
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/escape.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/lib/escape.c b/lib/escape.c
index 40338a9..c6aa3b9 100644
--- a/lib/escape.c
+++ b/lib/escape.c
@@ -78,15 +78,21 @@ char *curl_unescape(const char *string, int length)
char *curl_easy_escape(CURL *handle, const char *string, int inlength)
{
- size_t alloc = (inlength?(size_t)inlength:strlen(string))+1;
+ size_t alloc;
char *ns;
char *testing_ptr = NULL;
unsigned char in; /* we need to treat the characters unsigned */
- size_t newlen = alloc;
+ size_t newlen;
size_t strindex=0;
size_t length;
CURLcode result;
+ if(inlength < 0)
+ return NULL;
+
+ alloc = (inlength?(size_t)inlength:strlen(string))+1;
+ newlen = alloc;
+
ns = malloc(alloc);
if(!ns)
return NULL;
--
2.7.4
From 6a280152e3893938e5d26f5d535613eefab80b5a Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Tue, 13 Sep 2016 23:00:50 +0200
Subject: [PATCH 2/2] curl_easy_unescape: deny negative string lengths as input
CVE-2016-7167
Bug: https://curl.haxx.se/docs/adv_20160914.html
Upstream-commit: 01cf1308ee2e792c77bb1d2c9218c56a30fd40ae
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/escape.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/lib/escape.c b/lib/escape.c
index c6aa3b9..808ac6c 100644
--- a/lib/escape.c
+++ b/lib/escape.c
@@ -217,14 +217,16 @@ char *curl_easy_unescape(CURL *handle, const char *string, int length,
int *olen)
{
char *str = NULL;
- size_t inputlen = length;
- size_t outputlen;
- CURLcode res = Curl_urldecode(handle, string, inputlen, &str, &outputlen,
- FALSE);
- if(res)
- return NULL;
- if(olen)
- *olen = curlx_uztosi(outputlen);
+ if(length >= 0) {
+ size_t inputlen = length;
+ size_t outputlen;
+ CURLcode res = Curl_urldecode(handle, string, inputlen, &str, &outputlen,
+ FALSE);
+ if(res)
+ return NULL;
+ if(olen)
+ *olen = curlx_uztosi(outputlen);
+ }
return str;
}
--
2.7.4
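
With the two commits above applied, a negative length makes curl_easy_escape() and curl_easy_unescape() return NULL instead of having the value cast to a huge size_t. A short usage sketch:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();

  char *ok  = curl_easy_escape(curl, "a b&c", 0);   /* 0 means "use strlen()" */
  char *bad = curl_easy_escape(curl, "a b&c", -1);  /* now rejected -> NULL */

  printf("escaped: %s, negative length: %s\n",
         ok ? ok : "(null)", bad ? bad : "(null)");

  curl_free(ok);        /* bad is NULL, nothing to free */
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}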

curl.spec

@ -1,12 +1,48 @@
Summary: A utility for getting files from remote servers (FTP, HTTP, and others)
Name: curl
Version: 7.43.0
Release: 1%{?dist}
Release: 10%{?dist}
License: MIT
Group: Applications/Internet
Source: http://curl.haxx.se/download/%{name}-%{version}.tar.lzma
Source2: curlbuild.h
# prevent dnf from crashing when using both FTP and HTTP (#1248389)
Patch1: 0001-curl-7.43.0-f7dcc7c1.patch
# prevent test46 from failing due to expired cookie
Patch2: 0002-curl-7.43.0-002d58f1.patch
# prevent NSS from incorrectly re-using a session (#1104597)
Patch3: 0003-curl-7.43.0-958d2ffb.patch
# match credentials when re-using a proxy connection (CVE-2016-0755)
Patch4: 0004-curl-7.43.0-CVE-2016-0755.patch
# cookie: fix bug in export if any-domain cookie is present (#1311907)
Patch5: 0005-curl-7.43.0-ef0fdb83.patch
# tests/sshserver.pl: use RSA instead of DSA for host auth
Patch6: 0006-curl-7.43.0-effa575f.patch
# fix SIGSEGV of the curl tool while parsing URL with too many globs (#1340757)
Patch7: 0007-curl-7.49.1-urlglob.patch
# fix use of connection struct after free (CVE-2016-5421)
Patch8: 0008-curl-7.47.1-CVE-2016-5421.patch
# fix TLS session resumption client cert bypass (CVE-2016-5419)
Patch9: 0009-curl-7.47.1-CVE-2016-5419.patch
# fix re-using connections with wrong client cert (CVE-2016-5420)
Patch10: 0010-curl-7.47.1-CVE-2016-5420.patch
# work around race condition in PK11_FindSlotByName()
Patch11: 0011-curl-7.47.1-find-slot-race.patch
# reject negative string lengths in curl_easy_[un]escape() (CVE-2016-7167)
Patch12: 0012-curl-7.47.1-CVE-2016-7167.patch
# patch making libcurl multilib ready
Patch101: 0101-curl-7.32.0-multilib.patch
@ -26,6 +62,7 @@ BuildRequires: groff
BuildRequires: krb5-devel
BuildRequires: libidn-devel
BuildRequires: libmetalink-devel
BuildRequires: libnghttp2-devel
BuildRequires: libssh2-devel
BuildRequires: nss-devel
BuildRequires: openldap-devel
@ -51,7 +88,12 @@ BuildRequires: perl(Time::HiRes)
BuildRequires: perl(warnings)
BuildRequires: perl(vars)
# require valgrind to boost test coverage on i386 and x86_64
# The test-suite runs automatically through valgrind if valgrind is available
# on the system. By not installing valgrind into mock's chroot, we disable
# this feature for production builds on architectures where valgrind is known
# to be less reliable, in order to avoid unnecessary build failures (see RHBZ
# #810992, #816175, and #886891). Nevertheless, developers are free to install
# valgrind manually to improve test coverage on any architecture.
%ifarch %{ix86} x86_64
BuildRequires: valgrind
%endif
@ -111,6 +153,18 @@ documentation of the library, too.
%setup -q
# upstream patches
%patch1 -p1
%patch2 -p1
%patch3 -p1
%patch4 -p1
%patch5 -p1
%patch6 -p1
%patch7 -p1
%patch8 -p1
%patch9 -p1
%patch10 -p1
%patch11 -p1
%patch12 -p1
# Fedora patches
%patch101 -p1
@ -125,8 +179,9 @@ cd tests/data/
sed -i s/899\\\([0-9]\\\)/%{?__isa_bits}9\\1/ test{309,1028,1055,1056}
cd -
# disable test 1112 (#565305)
printf "1112\n" >> tests/data/DISABLED
# disable test 1112 (#565305) and test 1801
# <https://github.com/bagder/curl/commit/21e82bd6#commitcomment-12226582>
printf "1112\n1801\n" >> tests/data/DISABLED
# disable test 1319 on ppc64 (server times out)
%ifarch ppc64
@ -146,6 +201,7 @@ echo "1319" >> tests/data/DISABLED
--with-libidn \
--with-libmetalink \
--with-libssh2 \
--with-nghttp2 \
--without-ssl --with-nss
# --enable-debug
# use ^^^ to turn off optimizations, etc.
@ -228,6 +284,38 @@ rm -rf $RPM_BUILD_ROOT
%{_datadir}/aclocal/libcurl.m4
%changelog
* Wed Sep 14 2016 Kamil Dudka <kdudka@redhat.com> 7.43.0-10
- reject negative string lengths in curl_easy_[un]escape() (CVE-2016-7167)
* Fri Aug 26 2016 Kamil Dudka <kdudka@redhat.com> 7.43.0-9
- work around race condition in PK11_FindSlotByName()
- fix incorrect use of a previously loaded certificate from file
(related to CVE-2016-5420)
* Wed Aug 03 2016 Kamil Dudka <kdudka@redhat.com> 7.43.0-8
- fix re-using connections with wrong client cert (CVE-2016-5420)
- fix TLS session resumption client cert bypass (CVE-2016-5419)
- fix use of connection struct after free (CVE-2016-5421)
* Fri Jun 03 2016 Kamil Dudka <kdudka@redhat.com> 7.43.0-7
- fix SIGSEGV of the curl tool while parsing URL with too many globs (#1340757)
* Thu Feb 25 2016 Kamil Dudka <kdudka@redhat.com> 7.43.0-6
- cookie: fix bug in export if any-domain cookie is present (#1311907)
* Wed Jan 27 2016 Kamil Dudka <kdudka@redhat.com> 7.43.0-5
- match credentials when re-using a proxy connection (CVE-2016-0755)
* Fri Sep 18 2015 Kamil Dudka <kdudka@redhat.com> 7.43.0-4
- prevent NSS from incorrectly re-using a session (#1104597)
* Thu Aug 27 2015 Kamil Dudka <kdudka@redhat.com> 7.43.0-3
- prevent test46 from failing due to expired cookie
* Thu Jul 30 2015 Kamil Dudka <kdudka@redhat.com> 7.43.0-2
- prevent dnf from crashing when using both FTP and HTTP (#1248389)
- build support for the HTTP/2 protocol
* Wed Jun 17 2015 Kamil Dudka <kdudka@redhat.com> 7.43.0-1
- new upstream release (fixes CVE-2015-3236 and CVE-2015-3237)