Resolves: #1697566 - rebase to 7.65.3 to fix crashes of gnome and flatpak

Kamil Dudka 2019-07-22 16:01:48 +02:00
parent 0c07534eed
commit fa1eecb64d
15 changed files with 42 additions and 1009 deletions


@@ -1,76 +0,0 @@
From 082034e2334b2d0795b2b324ff3e0635bb7d2b86 Mon Sep 17 00:00:00 2001
From: Alessandro Ghedini <alessandro@ghedini.me>
Date: Tue, 5 Feb 2019 20:44:14 +0000
Subject: [PATCH 1/2] zsh.pl: update regex to better match curl -h output
The current regex fails to match '<...>' arguments properly (e.g. those
with spaces in them), which causes a completion script with wrong
descriptions for some options.
The problem can be reproduced as follows:
% curl --reso<TAB>
Upstream-commit: dbd32f3241b297b96ee11a51da1a661f528ca026
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
scripts/zsh.pl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/zsh.pl b/scripts/zsh.pl
index 1257190..941b322 100755
--- a/scripts/zsh.pl
+++ b/scripts/zsh.pl
@@ -7,7 +7,7 @@ use warnings;
my $curl = $ARGV[0] || 'curl';
-my $regex = '\s+(?:(-[^\s]+),\s)?(--[^\s]+)\s([^\s.]+)?\s+(.*)';
+my $regex = '\s+(?:(-[^\s]+),\s)?(--[^\s]+)\s*(\<.+?\>)?\s+(.*)';
my @opts = parse_main_opts('--help', $regex);
my $opts_str;
--
2.17.2
From 45abc785e101346f19599aa5f9fa1617e525ec4d Mon Sep 17 00:00:00 2001
From: Alessandro Ghedini <alessandro@ghedini.me>
Date: Tue, 5 Feb 2019 21:06:26 +0000
Subject: [PATCH 2/2] zsh.pl: escape ':' character
':' is interpreted as a separator by zsh, so if it is used as part of the argument
or option's description it needs to be escaped.
The problem can be reproduced as follows:
% curl -E <TAB>
Bug: https://bugs.debian.org/921452
Upstream-commit: b3cc8017b7364f588365be2b2629c49c142efdb7
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
scripts/zsh.pl | 3 +++
1 file changed, 3 insertions(+)
diff --git a/scripts/zsh.pl b/scripts/zsh.pl
index 941b322..0f9cbec 100755
--- a/scripts/zsh.pl
+++ b/scripts/zsh.pl
@@ -45,9 +45,12 @@ sub parse_main_opts {
my $option = '';
+ $arg =~ s/\:/\\\:/g if defined $arg;
+
$desc =~ s/'/'\\''/g if defined $desc;
$desc =~ s/\[/\\\[/g if defined $desc;
$desc =~ s/\]/\\\]/g if defined $desc;
+ $desc =~ s/\:/\\\:/g if defined $desc;
$option .= '{' . trim($short) . ',' if defined $short;
$option .= trim($long) if defined $long;
--
2.17.2


@@ -1,162 +0,0 @@
From 377101f138873bfa481785cb7d04c326006f0b5d Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Mon, 11 Feb 2019 07:56:00 +0100
Subject: [PATCH 1/3] connection_check: set ->data to the transfer doing the
check
The http2 code for connection checking needs a transfer to use. Make
sure a working one is set before handler->connection_check() is called.
Reported-by: jnbr on github
Fixes #3541
Closes #3547
Upstream-commit: 38d8e1bd4ed1ae52930ae466ecbac78e888b142f
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/url.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/url.c b/lib/url.c
index d5a9820..229c655 100644
--- a/lib/url.c
+++ b/lib/url.c
@@ -965,6 +965,7 @@ static bool extract_if_dead(struct connectdata *conn,
/* The protocol has a special method for checking the state of the
connection. Use it to check if the connection is dead. */
unsigned int state;
+ conn->data = data; /* use this transfer for now */
state = conn->handler->connection_check(conn, CONNCHECK_ISDEAD);
dead = (state & CONNRESULT_DEAD);
}
--
2.17.2
From 287f5d70395b3833f8901a57b29a48b87d84a9fe Mon Sep 17 00:00:00 2001
From: Jay Satiro <raysatiro@yahoo.com>
Date: Mon, 11 Feb 2019 23:00:00 -0500
Subject: [PATCH 2/3] connection_check: restore original conn->data after the
check
- Save the original conn->data before it's changed to the specified
data transfer for the connection check and then restore it afterwards.
This is a follow-up to 38d8e1b 2019-02-11.
History:
It was discovered a month ago that before checking whether to extract a
dead connection that that connection should be associated with a "live"
transfer for the check (ie original conn->data ignored and set to the
passed in data). A fix was landed in 54b201b which did that and also
cleared conn->data after the check. The original conn->data was not
restored, so presumably it was thought that a valid conn->data was no
longer needed.
Several days later it was discovered that a valid conn->data was needed
after the check and follow-up fix was landed in bbae24c which partially
reverted the original fix and attempted to limit the scope of when
conn->data was changed to only when pruning dead connections. In that
case conn->data was not cleared and the original conn->data not
restored.
A month later it was discovered that the original fix was somewhat
correct; a "live" transfer is needed for the check in all cases
because the original conn->data could be null, which could cause a bad deref
at arbitrary points in the check. A fix was landed in 38d8e1b which
expanded the scope to all cases. conn->data was not cleared and the
original conn->data not restored.
A day later it was discovered that not restoring the original conn->data
may lead to busy loops in applications that use the event interface, and
given this observation it's a pretty safe assumption that there is some
code path that still needs the original conn->data. This commit is the
follow-up fix for that, it restores the original conn->data after the
connection check.
Assisted-by: tholin@users.noreply.github.com
Reported-by: tholin@users.noreply.github.com
Fixes https://github.com/curl/curl/issues/3542
Closes #3559
Upstream-commit: 4015fae044ce52a639c9358e22a9e948f287c89f
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/url.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/url.c b/lib/url.c
index 229c655..a77e92d 100644
--- a/lib/url.c
+++ b/lib/url.c
@@ -965,8 +965,10 @@ static bool extract_if_dead(struct connectdata *conn,
/* The protocol has a special method for checking the state of the
connection. Use it to check if the connection is dead. */
unsigned int state;
+ struct Curl_easy *olddata = conn->data;
conn->data = data; /* use this transfer for now */
state = conn->handler->connection_check(conn, CONNCHECK_ISDEAD);
+ conn->data = olddata;
dead = (state & CONNRESULT_DEAD);
}
else {
@@ -995,7 +997,6 @@ struct prunedead {
static int call_extract_if_dead(struct connectdata *conn, void *param)
{
struct prunedead *p = (struct prunedead *)param;
- conn->data = p->data; /* transfer to use for this check */
if(extract_if_dead(conn, p->data)) {
/* stop the iteration here, pass back the connection that was extracted */
p->extracted = conn;
--
2.17.2
From 15e3f2eef87bff1210f43921cb15f03c68be59f7 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Tue, 19 Feb 2019 15:56:54 +0100
Subject: [PATCH 3/3] singlesocket: fix the 'sincebefore' placement
The variable wasn't properly reset within the loop and thus could remain
set for sockets that hadn't been set before and miss notifying the app.
This is a follow-up to 4c35574 (shipped in curl 7.64.0)
Reported-by: buzo-ffm on github
Detected-by: Jan Alexander Steffens
Fixes #3585
Closes #3589
Upstream-commit: afc00e047c773faeaa60a5f86a246cbbeeba5819
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/multi.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/multi.c b/lib/multi.c
index 130226f..28f4c47 100644
--- a/lib/multi.c
+++ b/lib/multi.c
@@ -2360,8 +2360,6 @@ static CURLMcode singlesocket(struct Curl_multi *multi,
int num;
unsigned int curraction;
int actions[MAX_SOCKSPEREASYHANDLE];
- unsigned int comboaction;
- bool sincebefore = FALSE;
for(i = 0; i< MAX_SOCKSPEREASYHANDLE; i++)
socks[i] = CURL_SOCKET_BAD;
@@ -2380,6 +2378,8 @@ static CURLMcode singlesocket(struct Curl_multi *multi,
i++) {
unsigned int action = CURL_POLL_NONE;
unsigned int prevaction = 0;
+ unsigned int comboaction;
+ bool sincebefore = FALSE;
s = socks[i];
--
2.17.2
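
Note: the conn->data handling patched above is exercised whenever a cached connection is probed for liveness before reuse. Below is a minimal, hedged sketch of that scenario from the application side (the URL is a placeholder; the liveness probe itself, including the HTTP/2 connection_check() path, is internal to libcurl):

```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    /* first transfer opens a connection that stays in the pool */
    curl_easy_perform(curl);

    /* before the second transfer reuses the pooled connection, libcurl
       checks whether it is still alive; for HTTP/2 that is the
       handler->connection_check() call patched above */
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
  }
  return 0;
}
```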


@@ -1,42 +0,0 @@
From d73dc8d3e70bde0ef999ecf7bcd5585b9892371c Mon Sep 17 00:00:00 2001
From: Michael Wallner <mike@php.net>
Date: Mon, 25 Feb 2019 19:05:02 +0100
Subject: [PATCH] cookies: fix NULL dereference if flushing cookies with no
CookieInfo set
Regression brought by a52e46f3900fb0 (shipped in 7.63.0)
Closes #3613
Upstream-commit: 8eddb8f4259193633cfc95a42603958a89b31de5
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/cookie.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/lib/cookie.c b/lib/cookie.c
index 4fb992a..d535170 100644
--- a/lib/cookie.c
+++ b/lib/cookie.c
@@ -1504,7 +1504,8 @@ static int cookie_output(struct CookieInfo *c, const char *dumphere)
struct Cookie **array;
/* at first, remove expired cookies */
- remove_expired(c);
+ if(c)
+ remove_expired(c);
if(!strcmp("-", dumphere)) {
/* use stdout */
@@ -1523,7 +1524,7 @@ static int cookie_output(struct CookieInfo *c, const char *dumphere)
"# This file was generated by libcurl! Edit at your own risk.\n\n",
out);
- if(c->numcookies) {
+ if(c && c->numcookies) {
array = malloc(sizeof(struct Cookie *) * c->numcookies);
if(!array) {
if(!use_stdout)
--
2.17.2
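
For context, cookie_output() runs when libcurl writes its cookie store to a jar. A minimal sketch of an application-side flush follows (illustrative only; the CURLOPT_COOKIEJAR path is a placeholder and this is not necessarily the exact sequence from the upstream report):

```
#include <curl/curl.h>

int main(void)
{
  CURL *curl;

  curl_global_init(CURL_GLOBAL_ALL);
  curl = curl_easy_init();
  if(curl) {
    /* file that cookie_output() writes to */
    curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "/tmp/cookies.txt");

    /* "FLUSH" asks libcurl to write all known cookies to the jar now;
       the fix above guards the case where no CookieInfo exists yet */
    curl_easy_setopt(curl, CURLOPT_COOKIELIST, "FLUSH");

    curl_easy_cleanup(curl);  /* cleanup also flushes to the jar */
  }
  curl_global_cleanup();
  return 0;
}
```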


@@ -1,118 +0,0 @@
From 5ddabe85b2e3e4fd08d06980719d71a2aed77a5b Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Thu, 28 Feb 2019 20:34:36 +0100
Subject: [PATCH] threaded-resolver: shutdown the resolver thread without error
message
When a transfer is done, the resolver thread will be brought down. That
could accidentally generate an error message in the error buffer even
though this is not an error situation and the transfer would still return
OK. An application that still reads the error buffer could find a
"Could not resolve host: [host name]" message there and get confused.
Reported-by: Michael Schmid
Fixes #3629
Closes #3630
Upstream-commit: 754ae103989a6ad0869d23a6a427d652b5b4a2fe
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/asyn-thread.c | 68 ++++++++++++++++++++++++++---------------------
1 file changed, 38 insertions(+), 30 deletions(-)
diff --git a/lib/asyn-thread.c b/lib/asyn-thread.c
index a9679d0..55e0811 100644
--- a/lib/asyn-thread.c
+++ b/lib/asyn-thread.c
@@ -461,6 +461,42 @@ static CURLcode resolver_error(struct connectdata *conn)
return result;
}
+static CURLcode thread_wait_resolv(struct connectdata *conn,
+ struct Curl_dns_entry **entry,
+ bool report)
+{
+ struct thread_data *td = (struct thread_data*) conn->async.os_specific;
+ CURLcode result = CURLE_OK;
+
+ DEBUGASSERT(conn && td);
+ DEBUGASSERT(td->thread_hnd != curl_thread_t_null);
+
+ /* wait for the thread to resolve the name */
+ if(Curl_thread_join(&td->thread_hnd)) {
+ if(entry)
+ result = getaddrinfo_complete(conn);
+ }
+ else
+ DEBUGASSERT(0);
+
+ conn->async.done = TRUE;
+
+ if(entry)
+ *entry = conn->async.dns;
+
+ if(!conn->async.dns && report)
+ /* a name was not resolved, report error */
+ result = resolver_error(conn);
+
+ destroy_async_data(&conn->async);
+
+ if(!conn->async.dns && report)
+ connclose(conn, "asynch resolve failed");
+
+ return result;
+}
+
+
/*
* Until we gain a way to signal the resolver threads to stop early, we must
* simply wait for them and ignore their results.
@@ -473,7 +509,7 @@ void Curl_resolver_kill(struct connectdata *conn)
unfortunately. Otherwise, we can simply cancel to clean up any resolver
data. */
if(td && td->thread_hnd != curl_thread_t_null)
- (void)Curl_resolver_wait_resolv(conn, NULL);
+ (void)thread_wait_resolv(conn, NULL, FALSE);
else
Curl_resolver_cancel(conn);
}
@@ -494,35 +530,7 @@ void Curl_resolver_kill(struct connectdata *conn)
CURLcode Curl_resolver_wait_resolv(struct connectdata *conn,
struct Curl_dns_entry **entry)
{
- struct thread_data *td = (struct thread_data*) conn->async.os_specific;
- CURLcode result = CURLE_OK;
-
- DEBUGASSERT(conn && td);
- DEBUGASSERT(td->thread_hnd != curl_thread_t_null);
-
- /* wait for the thread to resolve the name */
- if(Curl_thread_join(&td->thread_hnd)) {
- if(entry)
- result = getaddrinfo_complete(conn);
- }
- else
- DEBUGASSERT(0);
-
- conn->async.done = TRUE;
-
- if(entry)
- *entry = conn->async.dns;
-
- if(!conn->async.dns)
- /* a name was not resolved, report error */
- result = resolver_error(conn);
-
- destroy_async_data(&conn->async);
-
- if(!conn->async.dns)
- connclose(conn, "asynch resolve failed");
-
- return result;
+ return thread_wait_resolv(conn, entry, TRUE);
}
/*
--
2.17.2
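
The applications affected are those that read CURLOPT_ERRORBUFFER regardless of the return code. A minimal sketch of the defensive pattern that avoids acting on a stale message (the URL is a placeholder):

```
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  char errbuf[CURL_ERROR_SIZE];
  CURLcode res;
  CURL *curl = curl_easy_init();

  if(!curl)
    return 1;

  errbuf[0] = '\0';  /* start empty so leftovers are detectable */
  curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

  res = curl_easy_perform(curl);
  if(res != CURLE_OK)
    /* only consult the buffer when the transfer actually failed */
    fprintf(stderr, "transfer failed: %s\n",
            errbuf[0] ? errbuf : curl_easy_strerror(res));

  curl_easy_cleanup(curl);
  return (int)res;
}
```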


@@ -1,32 +0,0 @@
From 2e8f4d01cdd07779e0582257cb6b53c5a91d6504 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Mon, 11 Feb 2019 22:57:33 +0100
Subject: [PATCH] multi: remove verbose "Expire in" ... messages
Reported-by: James Brown
Bug: https://curl.haxx.se/mail/archive-2019-02/0013.html
Closes #3558
Upstream-commit: aabc7ae5ecf70973add429b5acbc86d6a57e4da5
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/multi.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/lib/multi.c b/lib/multi.c
index 28f4c47..856cc22 100644
--- a/lib/multi.c
+++ b/lib/multi.c
@@ -3028,9 +3028,6 @@ void Curl_expire(struct Curl_easy *data, time_t milli, expire_id id)
DEBUGASSERT(id < EXPIRE_LAST);
- infof(data, "Expire in %ld ms for %x (transfer %p)\n",
- (long)milli, id, data);
-
set = Curl_now();
set.tv_sec += milli/1000;
set.tv_usec += (unsigned int)(milli%1000)*1000;
--
2.17.2
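
The removed infof() calls only surface through libcurl's verbose/debug channel. Below is a minimal sketch of a debug callback that captures that channel, i.e. where the "Expire in ..." noise used to show up (the URL is a placeholder):

```
#include <stdio.h>
#include <curl/curl.h>

/* print only libcurl's informational text lines */
static int debug_cb(CURL *handle, curl_infotype type, char *data,
                    size_t size, void *userp)
{
  (void)handle;
  (void)userp;
  if(type == CURLINFO_TEXT)
    fwrite(data, 1, size, stderr);
  return 0;
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debug_cb);
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); /* needed for the callback */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```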


@@ -1,267 +0,0 @@
From 1202a02142791b453110c8b922cb57c0b11380ce Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Mon, 29 Apr 2019 08:00:49 +0200
Subject: [PATCH] CURL_MAX_INPUT_LENGTH: largest acceptable string input size
This limits all accepted input strings passed to libcurl to be less than
CURL_MAX_INPUT_LENGTH (8000000) bytes, for these API calls:
curl_easy_setopt() and curl_url_set().
The 8000000 number is arbitrary picked and is meant to detect mistakes
or abuse, not to limit actual practical use cases. By limiting the
acceptable string lengths we also reduce the risk of integer overflows
all over.
NOTE: This does not apply to `CURLOPT_POSTFIELDS`.
Test 1559 verifies.
Closes #3805
Upstream-commit: 5fc28510a4664f46459d9a40187d81cc08571e60
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/setopt.c | 7 ++++
lib/urlapi.c | 8 ++++
lib/urldata.h | 4 ++
tests/data/Makefile.inc | 2 +-
tests/data/test1559 | 44 +++++++++++++++++++++
tests/libtest/Makefile.inc | 6 ++-
tests/libtest/lib1559.c | 78 ++++++++++++++++++++++++++++++++++++++
7 files changed, 146 insertions(+), 3 deletions(-)
create mode 100644 tests/data/test1559
create mode 100644 tests/libtest/lib1559.c
diff --git a/lib/setopt.c b/lib/setopt.c
index d98ca66..95e9fcb 100644
--- a/lib/setopt.c
+++ b/lib/setopt.c
@@ -60,6 +60,13 @@ CURLcode Curl_setstropt(char **charp, const char *s)
if(s) {
char *str = strdup(s);
+ if(str) {
+ size_t len = strlen(str);
+ if(len > CURL_MAX_INPUT_LENGTH) {
+ free(str);
+ return CURLE_BAD_FUNCTION_ARGUMENT;
+ }
+ }
if(!str)
return CURLE_OUT_OF_MEMORY;
diff --git a/lib/urlapi.c b/lib/urlapi.c
index 3af8e93..39af964 100644
--- a/lib/urlapi.c
+++ b/lib/urlapi.c
@@ -648,6 +648,10 @@ static CURLUcode seturl(const char *url, CURLU *u, unsigned int flags)
************************************************************/
/* allocate scratch area */
urllen = strlen(url);
+ if(urllen > CURL_MAX_INPUT_LENGTH)
+ /* excessive input length */
+ return CURLUE_MALFORMED_INPUT;
+
path = u->scratch = malloc(urllen * 2 + 2);
if(!path)
return CURLUE_OUT_OF_MEMORY;
@@ -1278,6 +1282,10 @@ CURLUcode curl_url_set(CURLU *u, CURLUPart what,
const char *newp = part;
size_t nalloc = strlen(part);
+ if(nalloc > CURL_MAX_INPUT_LENGTH)
+ /* excessive input length */
+ return CURLUE_MALFORMED_INPUT;
+
if(urlencode) {
const char *i;
char *o;
diff --git a/lib/urldata.h b/lib/urldata.h
index ff3cc9a..d4a5ad8 100644
--- a/lib/urldata.h
+++ b/lib/urldata.h
@@ -79,6 +79,10 @@
*/
#define RESP_TIMEOUT (120*1000)
+/* Max string intput length is a precaution against abuse and to detect junk
+ input easier and better. */
+#define CURL_MAX_INPUT_LENGTH 8000000
+
#include "cookie.h"
#include "psl.h"
#include "formdata.h"
diff --git a/tests/data/Makefile.inc b/tests/data/Makefile.inc
index 3d13e3a..9ae1c6b 100644
--- a/tests/data/Makefile.inc
+++ b/tests/data/Makefile.inc
@@ -176,7 +176,7 @@ test1525 test1526 test1527 test1528 test1529 test1530 test1531 test1532 \
test1533 test1534 test1535 test1536 test1537 test1538 \
test1540 \
test1550 test1551 test1552 test1553 test1554 test1555 test1556 test1557 \
-test1558 test1560 test1561 test1562 \
+test1558 test1559 test1560 test1561 test1562 \
\
test1590 test1591 test1592 \
\
diff --git a/tests/data/test1559 b/tests/data/test1559
new file mode 100644
index 0000000..cbed6fb
--- /dev/null
+++ b/tests/data/test1559
@@ -0,0 +1,44 @@
+<testcase>
+<info>
+<keywords>
+CURLOPT_URL
+</keywords>
+</info>
+
+<reply>
+</reply>
+
+<client>
+<server>
+none
+</server>
+
+# require HTTP so that CURLOPT_POSTFIELDS works as assumed
+<features>
+http
+</features>
+<tool>
+lib1559
+</tool>
+
+<name>
+Set excessive URL lengths
+</name>
+</client>
+
+#
+# Verify that the test runs to completion without crashing
+<verify>
+<errorcode>
+0
+</errorcode>
+<stdout>
+CURLOPT_URL 10000000 bytes URL == 43
+CURLOPT_POSTFIELDS 10000000 bytes data == 0
+CURLUPART_URL 10000000 bytes URL == 3
+CURLUPART_SCHEME 10000000 bytes scheme == 3
+CURLUPART_USER 10000000 bytes user == 3
+</stdout>
+</verify>
+
+</testcase>
diff --git a/tests/libtest/Makefile.inc b/tests/libtest/Makefile.inc
index 9270822..62e6c20 100644
--- a/tests/libtest/Makefile.inc
+++ b/tests/libtest/Makefile.inc
@@ -30,8 +30,7 @@ noinst_PROGRAMS = chkhostname libauthretry libntlmconnect \
lib1534 lib1535 lib1536 lib1537 lib1538 \
lib1540 \
lib1550 lib1551 lib1552 lib1553 lib1554 lib1555 lib1556 lib1557 \
- lib1558 \
- lib1560 \
+ lib1558 lib1559 lib1560 \
lib1591 lib1592 \
lib1900 \
lib2033
@@ -520,6 +519,9 @@ lib1557_CPPFLAGS = $(AM_CPPFLAGS) -DLIB1557
lib1558_SOURCES = lib1558.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
lib1558_LDADD = $(TESTUTIL_LIBS)
+lib1559_SOURCES = lib1559.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
+lib1559_LDADD = $(TESTUTIL_LIBS)
+
lib1560_SOURCES = lib1560.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
lib1560_CFLAGS = $(AM_CFLAGS) -fno-builtin-strcmp
lib1560_LDADD = $(TESTUTIL_LIBS)
diff --git a/tests/libtest/lib1559.c b/tests/libtest/lib1559.c
new file mode 100644
index 0000000..2aa3615
--- /dev/null
+++ b/tests/libtest/lib1559.c
@@ -0,0 +1,78 @@
+/***************************************************************************
+ * _ _ ____ _
+ * Project ___| | | | _ \| |
+ * / __| | | | |_) | |
+ * | (__| |_| | _ <| |___
+ * \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+#include "test.h"
+
+#include "testutil.h"
+#include "warnless.h"
+#include "memdebug.h"
+
+#define EXCESSIVE 10*1000*1000
+int test(char *URL)
+{
+ CURLcode res = 0;
+ CURL *curl = NULL;
+ char *longurl = malloc(EXCESSIVE);
+ CURLU *u;
+ (void)URL;
+
+ memset(longurl, 'a', EXCESSIVE);
+ longurl[EXCESSIVE-1] = 0;
+
+ global_init(CURL_GLOBAL_ALL);
+ easy_init(curl);
+
+ res = curl_easy_setopt(curl, CURLOPT_URL, longurl);
+ printf("CURLOPT_URL %d bytes URL == %d\n",
+ EXCESSIVE, (int)res);
+
+ res = curl_easy_setopt(curl, CURLOPT_POSTFIELDS, longurl);
+ printf("CURLOPT_POSTFIELDS %d bytes data == %d\n",
+ EXCESSIVE, (int)res);
+
+ u = curl_url();
+ if(u) {
+ CURLUcode uc = curl_url_set(u, CURLUPART_URL, longurl, 0);
+ printf("CURLUPART_URL %d bytes URL == %d\n",
+ EXCESSIVE, (int)uc);
+ uc = curl_url_set(u, CURLUPART_SCHEME, longurl, CURLU_NON_SUPPORT_SCHEME);
+ printf("CURLUPART_SCHEME %d bytes scheme == %d\n",
+ EXCESSIVE, (int)uc);
+ uc = curl_url_set(u, CURLUPART_USER, longurl, 0);
+ printf("CURLUPART_USER %d bytes user == %d\n",
+ EXCESSIVE, (int)uc);
+ curl_url_cleanup(u);
+ }
+
+ free(longurl);
+
+ curl_easy_cleanup(curl);
+ curl_global_cleanup();
+
+ return 0;
+
+test_cleanup:
+
+ curl_easy_cleanup(curl);
+ curl_global_cleanup();
+
+ return res; /* return the final return code */
+}
--
2.20.1


@@ -1,31 +0,0 @@
From 55a27027d5f024a0ecc2c23c81ed99de6192c9f3 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Fri, 3 May 2019 22:20:37 +0200
Subject: [PATCH] tftp: use the current blksize for recvfrom()
bug: https://curl.haxx.se/docs/CVE-2019-5436.html
Reported-by: l00p3r on hackerone
CVE-2019-5436
Upstream-commit: 2576003415625d7b5f0e390902f8097830b82275
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/tftp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/tftp.c b/lib/tftp.c
index 269b3cd..4f2a131 100644
--- a/lib/tftp.c
+++ b/lib/tftp.c
@@ -1005,7 +1005,7 @@ static CURLcode tftp_connect(struct connectdata *conn, bool *done)
state->sockfd = state->conn->sock[FIRSTSOCKET];
state->state = TFTP_STATE_START;
state->error = TFTP_ERR_NONE;
- state->blksize = TFTP_BLKSIZE_DEFAULT;
+ state->blksize = blksize;
state->requested_blksize = blksize;
((struct sockaddr *)&state->local_addr)->sa_family =
--
2.20.1
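
Per the CVE-2019-5436 advisory, the overflow is only reachable when the application requests a TFTP block size smaller than the 512-byte default (504 or less). A minimal sketch of such a request (hostname, path and the 400-byte value are placeholders):

```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "tftp://tftp.example.com/boot.img");
    /* request a block size below the 512-byte default; the fix above
       makes recvfrom() use this value, matching the allocated buffer */
    curl_easy_setopt(curl, CURLOPT_TFTP_BLKSIZE, 400L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```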


@@ -1,221 +0,0 @@
From 42f31adb0386c168682d14bf1e52fbbdfc5fc36d Mon Sep 17 00:00:00 2001
From: Even Rouault <even.rouault@spatialys.com>
Date: Sun, 7 Apr 2019 14:07:35 +0200
Subject: [PATCH 1/2] multi_runsingle(): fix use-after-free
Fixes #3745
Closes #3746
The following snippet
```
int main()
{
CURL* hCurlHandle = curl_easy_init();
curl_easy_setopt(hCurlHandle, CURLOPT_URL, "http://example.com");
curl_easy_setopt(hCurlHandle, CURLOPT_PROXY, "1");
curl_easy_perform(hCurlHandle);
curl_easy_cleanup(hCurlHandle);
return 0;
}
```
triggers the following Valgrind warning
```
==4125== Invalid read of size 8
==4125== at 0x4E7D1EE: Curl_llist_remove (llist.c:97)
==4125== by 0x4E7EF5C: detach_connnection (multi.c:798)
==4125== by 0x4E80545: multi_runsingle (multi.c:1451)
==4125== by 0x4E8197C: curl_multi_perform (multi.c:2072)
==4125== by 0x4E766A0: easy_transfer (easy.c:625)
==4125== by 0x4E76915: easy_perform (easy.c:719)
==4125== by 0x4E7697C: curl_easy_perform (easy.c:738)
==4125== by 0x4008BE: main (in /home/even/curl/test)
==4125== Address 0x9b3d1d0 is 1,120 bytes inside a block of size 1,600 free'd
==4125== at 0x4C2ECF0: free (vg_replace_malloc.c:530)
==4125== by 0x4E62C36: conn_free (url.c:756)
==4125== by 0x4E62D34: Curl_disconnect (url.c:818)
==4125== by 0x4E48DF9: Curl_once_resolved (hostip.c:1097)
==4125== by 0x4E8052D: multi_runsingle (multi.c:1446)
==4125== by 0x4E8197C: curl_multi_perform (multi.c:2072)
==4125== by 0x4E766A0: easy_transfer (easy.c:625)
==4125== by 0x4E76915: easy_perform (easy.c:719)
==4125== by 0x4E7697C: curl_easy_perform (easy.c:738)
==4125== by 0x4008BE: main (in /home/even/curl/test)
==4125== Block was alloc'd at
==4125== at 0x4C2F988: calloc (vg_replace_malloc.c:711)
==4125== by 0x4E6438E: allocate_conn (url.c:1654)
==4125== by 0x4E685B4: create_conn (url.c:3496)
==4125== by 0x4E6968F: Curl_connect (url.c:4023)
==4125== by 0x4E802E7: multi_runsingle (multi.c:1368)
==4125== by 0x4E8197C: curl_multi_perform (multi.c:2072)
==4125== by 0x4E766A0: easy_transfer (easy.c:625)
==4125== by 0x4E76915: easy_perform (easy.c:719)
==4125== by 0x4E7697C: curl_easy_perform (easy.c:738)
==4125== by 0x4008BE: main (in /home/even/curl/test)
```
This has been bisected to commit 2f44e94
Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=14109
Credit to OSS Fuzz
Upstream-commit: 64cbae31078b2b64818a1d793516fbe73a7e4c45
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/multi.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/multi.c b/lib/multi.c
index 856cc22..cad76c1 100644
--- a/lib/multi.c
+++ b/lib/multi.c
@@ -1549,7 +1549,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
if(result)
/* if Curl_once_resolved() returns failure, the connection struct
is already freed and gone */
- Curl_detach_connnection(data); /* no more connection */
+ data->conn = NULL; /* no more connection */
else {
/* call again please so that we get the next socket setup */
rc = CURLM_CALL_MULTI_PERFORM;
--
2.20.1
From 2d79804c0fab1efa594b1739a3e408c4ab031704 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <daniel@haxx.se>
Date: Tue, 28 May 2019 08:23:43 +0200
Subject: [PATCH 2/2] multi: track users of a socket better
They need to be removed from the socket hash linked list with more care.
When sh_delentry() is called to remove a sockethash entry, remove all
individual transfers from the list first. To enable this, each Curl_easy struct
now stores a pointer to the sockethash entry to know how to remove itself.
Reported-by: Tom van der Woerdt and Kunal Ekawde
Fixes #3952
Fixes #3904
Closes #3953
Upstream-commit: 8581e1928ea8e125021408ec071fcedb0276c7fe
Signed-off-by: Kamil Dudka <kdudka@redhat.com>
---
lib/multi.c | 38 ++++++++++++++++++++++++++------------
lib/urldata.h | 1 +
2 files changed, 27 insertions(+), 12 deletions(-)
diff --git a/lib/multi.c b/lib/multi.c
index cad76c1..467cad8 100644
--- a/lib/multi.c
+++ b/lib/multi.c
@@ -245,8 +245,17 @@ static struct Curl_sh_entry *sh_addentry(struct curl_hash *sh,
/* delete the given socket + handle from the hash */
-static void sh_delentry(struct curl_hash *sh, curl_socket_t s)
+static void sh_delentry(struct Curl_sh_entry *entry,
+ struct curl_hash *sh, curl_socket_t s)
{
+ struct curl_llist *list = &entry->list;
+ struct curl_llist_element *e;
+ /* clear the list of transfers first */
+ for(e = list->head; e; e = list->head) {
+ struct Curl_easy *dta = e->ptr;
+ Curl_llist_remove(&entry->list, e, NULL);
+ dta->sh_entry = NULL;
+ }
/* We remove the hash entry. This will end up in a call to
sh_freeentry(). */
Curl_hash_delete(sh, (char *)&s, sizeof(curl_socket_t));
@@ -806,6 +815,11 @@ bool Curl_pipeline_wanted(const struct Curl_multi *multi, int bits)
occasionally be called with the pointer already cleared. */
void Curl_detach_connnection(struct Curl_easy *data)
{
+ if(data->sh_entry) {
+ /* still listed as a user of a socket hash entry, remove it */
+ Curl_llist_remove(&data->sh_entry->list, &data->sh_queue, NULL);
+ data->sh_entry = NULL;
+ }
data->conn = NULL;
}
@@ -2432,6 +2446,7 @@ static CURLMcode singlesocket(struct Curl_multi *multi,
/* add 'data' to the list of handles using this socket! */
Curl_llist_insert_next(&entry->list, entry->list.tail,
data, &data->sh_queue);
+ data->sh_entry = entry;
}
comboaction = (entry->writers? CURL_POLL_OUT : 0) |
@@ -2491,11 +2506,7 @@ static CURLMcode singlesocket(struct Curl_multi *multi,
multi->socket_cb(data, s, CURL_POLL_REMOVE,
multi->socket_userp,
entry->socketp);
- sh_delentry(&multi->sockhash, s);
- }
- else {
- /* remove this transfer as a user of this socket */
- Curl_llist_remove(&entry->list, &data->sh_queue, NULL);
+ sh_delentry(entry, &multi->sockhash, s);
}
}
} /* for loop over numsocks */
@@ -2539,7 +2550,7 @@ void Curl_multi_closed(struct Curl_easy *data, curl_socket_t s)
entry->socketp);
/* now remove it from the socket hash */
- sh_delentry(&multi->sockhash, s);
+ sh_delentry(entry, &multi->sockhash, s);
}
}
}
@@ -2630,7 +2641,6 @@ static CURLMcode multi_socket(struct Curl_multi *multi,
return result;
}
if(s != CURL_SOCKET_TIMEOUT) {
-
struct Curl_sh_entry *entry = sh_getentry(&multi->sockhash, s);
if(!entry)
@@ -2643,15 +2653,19 @@ static CURLMcode multi_socket(struct Curl_multi *multi,
else {
struct curl_llist *list = &entry->list;
struct curl_llist_element *e;
+ struct curl_llist_element *enext;
SIGPIPE_VARIABLE(pipe_st);
/* the socket can be shared by many transfers, iterate */
- for(e = list->head; e; e = e->next) {
+ for(e = list->head; e; e = enext) {
data = (struct Curl_easy *)e->ptr;
- if(data->magic != CURLEASY_MAGIC_NUMBER)
- /* bad bad bad bad bad bad bad */
- return CURLM_INTERNAL_ERROR;
+ /* assign 'enext' here since the 'e' struct might be cleared
+ further down in the singlesocket() call */
+ enext = e->next;
+
+ DEBUGASSERT(data);
+ DEBUGASSERT(data->magic == CURLEASY_MAGIC_NUMBER);
/* If the pipeline is enabled, take the handle which is in the head of
the pipeline. If we should write into the socket, take the
diff --git a/lib/urldata.h b/lib/urldata.h
index d4a5ad8..c793a6b 100644
--- a/lib/urldata.h
+++ b/lib/urldata.h
@@ -1797,6 +1797,7 @@ struct Curl_easy {
struct curl_llist_element connect_queue;
struct curl_llist_element pipeline_queue;
struct curl_llist_element sh_queue; /* list per Curl_sh_entry */
+ struct Curl_sh_entry *sh_entry; /* the socket hash this was added to */
CURLMstate mstate; /* the handle's state */
CURLcode result; /* previous result */
--
2.20.1
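
The crashes these two patches address (see #1697566) were reported from event-driven users of the multi interface (curl_multi_socket_action); the sketch below only illustrates the "many parallel transfers on one multi handle" workload, using the simpler polling loop, with NTRANSFERS and the URL as placeholders:

```
#include <curl/curl.h>

#define NTRANSFERS 50  /* placeholder for "many parallel transfers" */

int main(void)
{
  CURL *handles[NTRANSFERS];
  CURLM *multi = curl_multi_init();
  int i, running = 0;

  for(i = 0; i < NTRANSFERS; i++) {
    handles[i] = curl_easy_init();
    curl_easy_setopt(handles[i], CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, handles[i]);
  }

  /* drive all transfers to completion */
  do {
    curl_multi_perform(multi, &running);
    if(running)
      curl_multi_wait(multi, NULL, 0, 1000, NULL);
  } while(running);

  for(i = 0; i < NTRANSFERS; i++) {
    curl_multi_remove_handle(multi, handles[i]);
    curl_easy_cleanup(handles[i]);
  }
  curl_multi_cleanup(multi);
  return 0;
}
```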


@@ -12,7 +12,7 @@ diff --git a/configure b/configure
index 8f079a3..53b4774 100755
--- a/configure
+++ b/configure
@@ -16250,18 +16250,11 @@ $as_echo "yes" >&6; }
@@ -16288,18 +16288,11 @@ $as_echo "yes" >&6; }
gccvhi=`echo $gccver | cut -d . -f1`
gccvlo=`echo $gccver | cut -d . -f2`
compiler_num=`(expr $gccvhi "*" 100 + $gccvlo) 2>/dev/null`


@@ -14,8 +14,8 @@ index e441278..b0958b6 100644
+-g "http://%HOST6IP:%HTTP6PORT/1083" --interface localhost6
</command>
<precheck>
-perl -e "if ('%CLIENT6IP' ne '[::1]') {print 'Test requires default test server host address';} else {exec './server/resolve --ipv6 ip6-localhost'; print 'Cannot run precheck resolve';}"
+perl -e "if ('%CLIENT6IP' ne '[::1]') {print 'Test requires default test server host address';} else {exec './server/resolve --ipv6 localhost6'; print 'Cannot run precheck resolve';}"
-perl -e "if ('%CLIENT6IP' ne '[::1]') {print 'Test requires default test client host address';} else {exec './server/resolve --ipv6 ip6-localhost'; print 'Cannot run precheck resolve';}"
+perl -e "if ('%CLIENT6IP' ne '[::1]') {print 'Test requires default test client host address';} else {exec './server/resolve --ipv6 localhost6'; print 'Cannot run precheck resolve';}"
</precheck>
</client>


@@ -26,8 +26,8 @@ diff --git a/tests/libtest/Makefile.inc b/tests/libtest/Makefile.inc
index 080421b..ea3b806 100644
--- a/tests/libtest/Makefile.inc
+++ b/tests/libtest/Makefile.inc
@@ -521,6 +521,7 @@ lib1558_SOURCES = lib1558.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
lib1558_LDADD = $(TESTUTIL_LIBS)
@@ -531,6 +531,7 @@ lib1559_SOURCES = lib1559.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
lib1559_LDADD = $(TESTUTIL_LIBS)
lib1560_SOURCES = lib1560.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
+lib1560_CFLAGS = $(AM_CFLAGS) -fno-builtin-strcmp


@@ -1,11 +0,0 @@
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEJ+3q8i86vOtQ25oSXMkI/bceEsIFAlxahccACgkQXMkI/bce
EsKdrAf+OoNH+Yz1HfJG5MtmEi2sgRC56iAvZBQujPG8SJYGnT3D2nLiuC2+bzA8
eMCqisodW5f6lV/9JRvLmLS0dhxAfdf/NHlMOdtgSv+NzVGsggpHeYEZ7HucRHsQ
AKZ6/wx7rby8yZqrn2s7yWWB0qgiajWx30r+CJEYXpuw+YwZ2qZo5ecM7fa/J9ko
ESwb7BLF6KMkdSz1wSApwCdznB/BXOaPrUBMiOcwO7ftq/t1ZmqnUWLtdlSp8OoH
Tw832H1kCP2OFHcOFTQmZJLagRQtLBhC522wNsagXaMwak6uhoFApcAPqoPdm4Pm
PvTO6aAopZk+sX9VemdSQzx/4ysT3w==
=HOlc
-----END PGP SIGNATURE-----

curl-7.65.3.tar.xz.asc

@@ -0,0 +1,11 @@
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEJ+3q8i86vOtQ25oSXMkI/bceEsIFAl0xj7oACgkQXMkI/bce
EsKYbgf9G41o5x73tc+2TOGt2QmJ7ukyHmd5Vq7XTSNdNU5dJ41Z3qh9Jm72x62i
b4kJMjWyoL2j031ml5JevycpMpNa1v784UlPW2tzzL2B7v6vcA4xknJRLWlPlcTJ
HOgub6r7g/zhOpdAeJh8o4jkBLUyN+S/HOyHLWcvdWDnhqUAmpZfIqtd8kjqzDul
XAkdj7MxWqKZ3wXWwlpp4j81jpfOj7KCC/ZpxlJ0KfefgYEzV23O2hcJzw57jqTy
SQZc39uTQOjbZPlBXJD55QeVISCwe53pn55aWQll90XfE3XRapuYZdiL8wLwtl/L
tjugTKjfoy9qqOGH5YB/4kHqoSJqow==
=Itbi
-----END PGP SIGNATURE-----


@@ -1,34 +1,10 @@
Summary: A utility for getting files from remote servers (FTP, HTTP, and others)
Name: curl
Version: 7.64.0
Release: 8%{?dist}
Version: 7.65.3
Release: 1%{?dist}
License: MIT
Source: https://curl.haxx.se/download/%{name}-%{version}.tar.xz
# make zsh completion work again
Patch1: 0001-curl-7.64.0-zsh-completion.patch
# prevent NetworkManager from leaking file descriptors (#1680198)
Patch2: 0002-curl-7.64.0-nm-fd-leak.patch
# fix NULL dereference if flushing cookies with no CookieInfo set (#1683676)
Patch3: 0003-curl-7.64.0-cookie-segfault.patch
# avoid spurious "Could not resolve host: [host name]" error messages
Patch4: 0004-curl-7.64.0-spurious-resolver-error.patch
# remove verbose "Expire in" ... messages (#1690971)
Patch5: 0005-curl-7.64.0-expire-in-verbose-msgs.patch
# fix integer overflows in curl_url_set() (CVE-2019-5435)
Patch16: 0016-curl-7.64.0-CVE-2019-5435.patch
# fix TFTP receive buffer overflow (CVE-2019-5436)
Patch17: 0017-curl-7.64.0-CVE-2019-5436.patch
# prevent multi from crashing with many parallel transfers (#1697566, #1723242)
Patch18: 0018-curl-7.64.0-multi-sigsegv.patch
# patch making libcurl multilib ready
Patch101: 0101-curl-7.32.0-multilib.patch
@@ -63,6 +39,7 @@ BuildRequires: openldap-devel
BuildRequires: openssh-clients
BuildRequires: openssh-server
BuildRequires: openssl-devel
BuildRequires: perl-interpreter
BuildRequires: pkgconfig
BuildRequires: python3-devel
BuildRequires: sed
@@ -72,6 +49,12 @@ BuildRequires: zlib-devel
# needed to compress content of tool_hugehelp.c after changing curl.1 man page
BuildRequires: perl(IO::Compress::Gzip)
# needed for generation of shell completions
BuildRequires: perl(Getopt::Long)
BuildRequires: perl(Pod::Usage)
BuildRequires: perl(strict)
BuildRequires: perl(warnings)
# gnutls-serv is used by the upstream test-suite
BuildRequires: gnutls-utils
@@ -87,10 +70,8 @@ BuildRequires: perl(File::Copy)
BuildRequires: perl(File::Spec)
BuildRequires: perl(IPC::Open2)
BuildRequires: perl(MIME::Base64)
BuildRequires: perl(strict)
BuildRequires: perl(Time::Local)
BuildRequires: perl(Time::HiRes)
BuildRequires: perl(warnings)
BuildRequires: perl(vars)
# The test-suite runs automatically through valgrind if valgrind is available
@@ -190,11 +171,6 @@ be installed.
%setup -q
# upstream patches
%patch1 -p1
%patch2 -p1
%patch3 -p1
%patch4 -p1
%patch5 -p1
# Fedora patches
%patch101 -p1
@@ -203,11 +179,6 @@ be installed.
%patch104 -p1
%patch105 -p1
# upstream patches
%patch16 -p1
%patch17 -p1
%patch18 -p1
# make tests/*.py use Python 3
sed -e '1 s|^#!/.*python|#!%{__python3}|' -i tests/*.py
@@ -326,6 +297,10 @@ make DESTDIR=$RPM_BUILD_ROOT INSTALL="install -p" install
LD_LIBRARY_PATH="$RPM_BUILD_ROOT%{_libdir}:$LD_LIBRARY_PATH" \
make DESTDIR=$RPM_BUILD_ROOT INSTALL="install -p" install -C scripts
# do not install /usr/share/fish/completions/curl.fish which is also installed
# by fish-3.0.2-1.module_f31+3716+57207597 and would trigger a conflict
rm -rf ${RPM_BUILD_ROOT}%{_datadir}/fish
rm -f ${RPM_BUILD_ROOT}%{_libdir}/libcurl.la
%ldconfig_scriptlets -n libcurl
@@ -333,13 +308,17 @@ rm -f ${RPM_BUILD_ROOT}%{_libdir}/libcurl.la
%ldconfig_scriptlets -n libcurl-minimal
%files
%doc CHANGES README*
%doc docs/BUGS docs/FAQ docs/FEATURES
%doc docs/MANUAL docs/RESOURCES
%doc docs/TheArtOfHttpScripting docs/TODO
%doc CHANGES
%doc README
%doc docs/BUGS
%doc docs/FAQ
%doc docs/FEATURES
%doc docs/RESOURCES
%doc docs/TODO
%doc docs/TheArtOfHttpScripting
%{_bindir}/curl
%{_mandir}/man1/curl.1*
%{_datadir}/zsh/site-functions
%{_datadir}/zsh
%files -n libcurl
%license COPYING
@@ -367,6 +346,9 @@ rm -f ${RPM_BUILD_ROOT}%{_libdir}/libcurl.la
%{_libdir}/libcurl.so.4.[0-9].[0-9].minimal
%changelog
* Mon Jul 22 2019 Kamil Dudka <kdudka@redhat.com> - 7.65.3-1
- rebase to 7.65.3 to fix crashes of gnome and flatpak (#1697566)
* Mon Jul 01 2019 Kamil Dudka <kdudka@redhat.com> - 7.64.0-8
- prevent multi from crashing with many parallel transfers (#1697566, #1723242)


@@ -1 +1 @@
SHA512 (curl-7.64.0.tar.xz) = 953f1f5336ce5dfd1b9f933624432d401552d91ee02d39ecde6f023c956f99ec6aae8d7746d7c34b6eb2d6452f114e67da4e64d9c8dd90b7644b7844e7b9b423
SHA512 (curl-7.65.3.tar.xz) = fc4f041d3d6682378ce9eef2c6081e6ad83bb2502ea4c992c760266584c09e9ebca7c6d35958bd32a888702d9308cbce7aef69c431f97994107d7ff6b953941b