- _ _ ____ _
- ___| | | | _ \| |
- / __| | | | |_) | |
- | (__| |_| | _ <| |___
- \___|\___/|_| \_\_____|
- Things that could be nice to do in the future
- Things to do in project curl. Please tell us what you think, contribute and
- send us patches that improve things!
- Be aware that these are things that we could do, or have once been considered
- things we could do. If you want to work on any of these areas, please
- consider bringing it up for discussions first on the mailing list so that we
- all agree it is still a good idea for the project!
- All bugs documented in the KNOWN_BUGS document are subject for fixing!
- 1. libcurl
- 1.2 More data sharing
- 1.3 struct lifreq
- 1.4 signal-based resolver timeouts
- 1.5 get rid of PATH_MAX
- 1.6 Modified buffer size approach
- 1.7 Detect when called from within callbacks
- 1.8 CURLOPT_RESOLVE for any port number
- 1.9 Cache negative name resolves
- 1.11 minimize dependencies with dynamically loaded modules
- 1.12 have form functions use CURL handle argument
- 1.14 Typesafe curl_easy_setopt()
- 1.15 Monitor connections in the connection pool
- 1.16 Try to URL encode given URL
- 1.17 Add support for IRIs
- 1.18 try next proxy if one doesn't work
- 1.19 Timeout idle connections from the pool
- 1.20 SRV and URI DNS records
- 1.21 API for URL parsing/splitting
- 1.23 Offer API to flush the connection pool
- 1.24 TCP Fast Open for windows
- 2. libcurl - multi interface
- 2.1 More non-blocking
- 2.2 Better support for same name resolves
- 2.3 Non-blocking curl_multi_remove_handle()
- 2.4 Split connect and authentication process
- 2.5 Edge-triggered sockets should work
- 3. Documentation
- 3.2 Provide cmake config-file
- 4. FTP
- 4.1 HOST
- 4.2 Alter passive/active on failure and retry
- 4.3 Earlier bad letter detection
- 4.4 REST for large files
- 4.5 ASCII support
- 4.6 GSSAPI via Windows SSPI
- 4.7 STAT for LIST without data connection
- 5. HTTP
- 5.1 Better persistency for HTTP 1.0
- 5.2 support FF3 sqlite cookie files
- 5.3 Rearrange request header order
- 5.4 HTTP Digest using SHA-256
- 5.5 auth= in URLs
- 5.6 Refuse "downgrade" redirects
- 5.7 Brotli compression
- 5.8 QUIC
- 5.9 Improve formpost API
- 5.10 Leave secure cookies alone
- 5.11 Chunked transfer multipart formpost
- 6. TELNET
- 6.1 ditch stdin
- 6.2 ditch telnet-specific select
- 6.3 feature negotiation debug data
- 7. SMTP
- 7.1 Pipelining
- 7.2 Enhanced capability support
- 7.3 Add CURLOPT_MAIL_CLIENT option
- 8. POP3
- 8.1 Pipelining
- 8.2 Enhanced capability support
- 9. IMAP
- 9.1 Enhanced capability support
- 10. LDAP
- 10.1 SASL based authentication mechanisms
- 11. SMB
- 11.1 File listing support
- 11.2 Honor file timestamps
- 11.3 Use NTLMv2
- 11.4 Create remote directories
- 12. New protocols
- 12.1 RSYNC
- 13. SSL
- 13.1 Disable specific versions
- 13.2 Provide mutex locking API
- 13.3 Evaluate SSL patches
- 13.4 Cache/share OpenSSL contexts
- 13.5 Export session ids
- 13.6 Provide callback for cert verification
- 13.7 improve configure --with-ssl
- 13.8 Support DANE
- 13.10 Support SSLKEYLOGFILE
- 13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
- 13.12 Support HSTS
- 13.13 Support HPKP
- 14. GnuTLS
- 14.1 SSL engine stuff
- 14.2 check connection
- 15. WinSSL/SChannel
- 15.1 Add support for client certificate authentication
- 15.2 Add support for custom server certificate validation
- 15.3 Add support for the --ciphers option
- 16. SASL
- 16.1 Other authentication mechanisms
- 16.2 Add QOP support to GSSAPI authentication
- 16.3 Support binary messages (i.e.: non-base64)
- 17. SSH protocols
- 17.1 Multiplexing
- 17.2 SFTP performance
- 17.3 Support better than MD5 hostkey hash
- 17.4 Support CURLOPT_PREQUOTE
- 18. Command line tool
- 18.1 sync
- 18.2 glob posts
- 18.3 prevent file overwriting
- 18.4 simultaneous parallel transfers
- 18.5 provide formpost headers
- 18.6 warning when setting an option
- 18.8 offer color-coded HTTP header output
- 18.9 Choose the name of file in braces for complex URLs
- 18.10 improve how curl works in a windows console window
- 18.11 -w output to stderr
- 18.12 keep running, read instructions from pipe/socket
- 18.13 support metalink in http headers
- 18.14 --fail without --location should treat 3xx as a failure
- 18.15 --retry should resume
- 18.16 send only part of --data
- 18.17 consider file name from the redirected URL with -O ?
- 19. Build
- 19.1 roffit
- 19.2 Enable PIE and RELRO by default
- 20. Test suite
- 20.1 SSL tunnel
- 20.2 nicer lacking perl message
- 20.3 more protocols supported
- 20.4 more platforms supported
- 20.5 Add support for concurrent connections
- 20.6 Use the RFC6265 test suite
- 21. Next SONAME bump
- 21.1 http-style HEAD output for FTP
- 21.2 combine error codes
- 21.3 extend CURLOPT_SOCKOPTFUNCTION prototype
- 22. Next major release
- 22.1 cleanup return codes
- 22.2 remove obsolete defines
- 22.3 size_t
- 22.4 remove several functions
- 22.5 remove CURLOPT_FAILONERROR
- 22.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
- 22.7 remove progress meter from libcurl
- 22.8 remove 'curl_httppost' from public
- ==============================================================================
- 1. libcurl
- 1.2 More data sharing
- curl_share_* functions already exist and work, and they can be extended to
- share more. For example, enable sharing of the ares channel and the
- connection cache.
- 1.3 struct lifreq
- Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
- SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
- To support IPv6 interface addresses for network interfaces properly.
- 1.4 signal-based resolver timeouts
- libcurl built without an asynchronous resolver library uses alarm() to time
- out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
- signal handler back into the library with a sigsetjmp, which effectively
- causes libcurl to continue running within the signal handler. This is
- non-portable and could cause problems on some platforms. A discussion on the
- problem is available at https://curl.haxx.se/mail/lib-2008-09/0197.html
- Also, alarm() provides timeout resolution only to the nearest second. alarm
- ought to be replaced by setitimer on systems that support it.
- 1.5 get rid of PATH_MAX
- Having code use and rely on PATH_MAX is not nice:
- http://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
- Currently the SSH based code uses it a bit, but to remove PATH_MAX from there
- we need libssh2 to properly tell us when we pass in too small a buffer and
- its current API (as of libssh2 1.2.7) doesn't.
- 1.6 Modified buffer size approach
- Current libcurl allocates a fixed 16K size buffer for download and an
- additional 16K for upload. They are always unconditionally part of the easy
- handle. If CRLF translations are requested, an additional 32K "scratch
- buffer" is allocated. A total of 64K transfer buffers in the worst case.
- First, while the handles are not actually in use these buffers could be freed
- so that handles merely lingering in queues waste less memory.
- Secondly, SFTP is a protocol that needs to handle many ~30K blocks at once
- since each needs to be individually acked and therefore libssh2 must be
- allowed to send (or receive) many separate ones in parallel to achieve high
- transfer speeds. A current libcurl build with a 16K buffer makes that
- impossible, but one with a 512K buffer will reach MUCH faster transfers. But
- allocating 512K unconditionally for all buffers just in case they would like
- to do fast SFTP transfers at some point is not a good solution either.
- Dynamically allocate buffer size depending on protocol in use in combination
- with freeing it after each individual transfer? Other suggestions?
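- As a sketch of the dynamic-allocation idea, a helper could pick the buffer
- size from the protocol in use. The function name below is invented for this
- illustration; the sizes are the ones discussed above, not actual libcurl
- behavior:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: choose a transfer buffer size per protocol.
   Not libcurl code; the sizes mirror the numbers discussed above. */
static size_t pick_buffer_size(const char *scheme)
{
  /* SFTP benefits from many ~30K blocks in flight at once */
  if(!strcmp(scheme, "sftp") || !strcmp(scheme, "scp"))
    return 512 * 1024;
  /* everything else keeps the current fixed 16K default */
  return 16 * 1024;
}
```

- Freeing the buffer after each individual transfer would then bound the
- memory cost to the time a transfer is actually running.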
- 1.7 Detect when called from within callbacks
- We should set a state variable before calling callbacks, so that we can
- subsequently add code within libcurl that returns an error when called from
- within a callback in situations where that is not supported.
- 1.8 CURLOPT_RESOLVE for any port number
- This option allows applications to set a replacement IP address for a given
- host + port pair. Consider adding support for providing a replacement address
- for the host name on all port numbers.
- See https://github.com/curl/curl/issues/1264
- 1.9 Cache negative name resolves
- A name resolve that has failed is likely to fail when made again within a
- short period of time. Currently we only cache positive responses.
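- A minimal sketch of what a negative entry could look like; the struct, the
- TTL value and the helper function are all invented for illustration:

```c
#include <assert.h>
#include <string.h>
#include <time.h>

/* Hypothetical negative-cache entry: remember that a name failed to
   resolve and consider that failure current for a short TTL. */
#define NEG_TTL 60 /* seconds; illustrative choice */

struct neg_entry {
  char name[256];
  time_t failed_at;
};

static int neg_cache_hit(const struct neg_entry *e, const char *name,
                         time_t now)
{
  return !strcmp(e->name, name) && (now - e->failed_at) < NEG_TTL;
}
```

- A lookup that hits such an entry could fail immediately instead of
- re-asking the resolver.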
- 1.11 minimize dependencies with dynamically loaded modules
- We can create a system with loadable modules/plug-ins, where these modules
- would be the ones that link to 3rd party libs. That would allow us to avoid
- having to load ALL dependencies, since only the modules needed for the
- protocols an application actually uses would have to be loaded. See
- https://github.com/curl/curl/issues/349
- 1.12 have form functions use CURL handle argument
- curl_formadd() and curl_formget() both currently have no CURL handle
- argument, but both can use a callback that is set in the easy handle, and
- thus curl_formget() with callback cannot function without first having
- curl_easy_perform() (or similar) called - which is hard to grasp and a design
- mistake.
- The curl_formadd() design can probably also be reconsidered to make it easier
- to use and less error-prone. Probably easiest by splitting it into several
- function calls.
- 1.14 Typesafe curl_easy_setopt()
- One of the most common problems in libcurl using applications is the lack of
- type checks for curl_easy_setopt() which happens because it accepts varargs
- and thus can take any type.
- One possible solution to this is to introduce a few different versions of the
- setopt function for the different kinds of data you can set.
- curl_easy_set_num() - sets a long value
- curl_easy_set_large() - sets a curl_off_t value
- curl_easy_set_ptr() - sets a pointer
- curl_easy_set_cb() - sets a callback PLUS its callback data
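- The list above could be sketched with stub types like this; none of the
- curl_easy_set_*() functions exist in libcurl today, so the code only shows
- the shape the typed setters would take:

```c
#include <assert.h>

/* Stub handle and typed setters illustrating the proposal; the compiler
   now checks each value's type instead of accepting varargs. */
typedef struct {
  long num;
  long long large;  /* stands in for curl_off_t */
  const void *ptr;
} stub_easy;

static void curl_easy_set_num(stub_easy *h, int opt, long v)
{ (void)opt; h->num = v; }

static void curl_easy_set_large(stub_easy *h, int opt, long long v)
{ (void)opt; h->large = v; }

static void curl_easy_set_ptr(stub_easy *h, int opt, const void *p)
{ (void)opt; h->ptr = p; }
```

- Passing, say, a pointer to curl_easy_set_num() would then be a compile-time
- error instead of silent undefined behavior.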
- 1.15 Monitor connections in the connection pool
- libcurl's connection cache or pool holds a number of open connections for the
- purpose of possible subsequent connection reuse. It may contain anywhere
- from a few to a significant number of connections. Currently, libcurl leaves
- all connections as they are, and only when a connection is iterated over for
- matching or reuse purposes is it verified that it is still alive.
- Those connections may get closed by the server side for idleness or they may
- get an HTTP/2 ping from the peer to verify that they're still alive. By adding
- monitoring of the connections while in the pool, libcurl can detect dead
- connections (and close them) better and earlier, and it can handle HTTP/2
- pings to keep such ones alive even when not actively doing transfers on them.
- 1.16 Try to URL encode given URL
- Given a URL that for example contains spaces, libcurl could have an option
- that would try somewhat harder than it does now and convert spaces to %20 and
- perhaps URL encode byte values over 128 etc (basically do what the redirect
- following code already does).
- https://github.com/curl/curl/issues/514
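- A rough sketch of such a conversion; the function is invented for
- illustration and a real implementation would have to handle buffer sizing
- and already-present %-escapes more carefully:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Encode spaces, controls and bytes over 127 as %XX, leaving other
   characters untouched; illustrative only. */
static void light_urlencode(const char *in, char *out, size_t outlen)
{
  static const char hex[] = "0123456789ABCDEF";
  size_t o = 0;
  for(; *in && o + 4 < outlen; in++) {
    unsigned char c = (unsigned char)*in;
    if(c <= 0x20 || c > 0x7e) {
      out[o++] = '%';
      out[o++] = hex[c >> 4];
      out[o++] = hex[c & 0xf];
    }
    else
      out[o++] = (char)c;
  }
  out[o] = '\0';
}
```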
- 1.17 Add support for IRIs
- IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
- support this, curl/libcurl would need to translate/encode the given input
- from the input string encoding into percent encoded output "over the wire".
- To make that work smoothly for curl users even on Windows, curl would
- probably need to be able to convert from several input encodings.
- 1.18 try next proxy if one doesn't work
- Allow an application to specify a list of proxies to try, and when failing
- to connect to the first, go on and try the next instead until the list is
- exhausted. Browsers support this feature at least when they specify proxies
- using PACs.
- https://github.com/curl/curl/issues/896
- 1.19 Timeout idle connections from the pool
- libcurl currently keeps connections in its connection pool for an indefinite
- period of time, until a connection either gets reused, is found to have
- been closed by the server or gets pruned to make room for a new connection.
- To reduce overhead (especially for when we add monitoring of the connections
- in the pool), we should introduce a timeout so that connections that have
- been idle for N seconds get closed.
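- A sketch of such a prune pass over the pool; the array-of-timestamps
- representation and the timeout value are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <time.h>

#define IDLE_TIMEOUT 118 /* seconds; illustrative value for N */

/* Close (here: zero out) every pooled connection idle for N seconds
   or more; returns how many were closed. */
static size_t prune_idle(time_t *last_used, size_t n, time_t now)
{
  size_t closed = 0, i;
  for(i = 0; i < n; i++) {
    if(last_used[i] && (now - last_used[i]) >= IDLE_TIMEOUT) {
      last_used[i] = 0; /* 0 marks the slot as closed/free */
      closed++;
    }
  }
  return closed;
}
```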
- 1.20 SRV and URI DNS records
- Offer support for resolving SRV and URI DNS records for libcurl to know which
- server to connect to for various protocols (including HTTP!).
- 1.21 API for URL parsing/splitting
- libcurl has always parsed URLs internally and never exposed any API or
- features to allow applications to do it. Still most or many applications
- using libcurl need that ability. In polls to users, we've learned that many
- libcurl users would like to see and use such an API.
- 1.23 Offer API to flush the connection pool
- Sometimes applications want to flush all the existing connections kept alive.
- An API could allow a forced flush or just a forced loop that would properly
- close all connections that have been closed by the server already.
- 1.24 TCP Fast Open for windows
- libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
- Mac OS. Windows supports TCP Fast Open starting with Windows 10, version 1607
- and we should add support for it.
- 2. libcurl - multi interface
- 2.1 More non-blocking
- Make sure we don't ever loop because of non-blocking sockets returning
- EWOULDBLOCK or similar. Blocking cases include:
- - Name resolves on non-windows unless c-ares or the threaded resolver is used
- - HTTP proxy CONNECT operations
- - SOCKS proxy handshakes
- - file:// transfers
- - TELNET transfers
- - The "DONE" operation (post transfer protocol-specific actions) for the
- protocols SFTP, SMTP, FTP. Fixing Curl_done() for this is a worthy task.
- 2.2 Better support for same name resolves
- If a name resolve has been initiated for name NN and a second easy handle
- wants to resolve that name as well, make it wait for the first resolve to end
- up in the cache instead of doing a second separate resolve. This is
- especially needed when adding many simultaneous handles using the same host
- name when the DNS resolver can get flooded.
- 2.3 Non-blocking curl_multi_remove_handle()
- The multi interface has a few API calls that assume a blocking behavior,
- like add_handle() and remove_handle(), which limits what we can do
- internally. The multi API needs to be moved even more into a single function
- that "drives" everything in a non-blocking manner and signals when something
- is done. A remove or add would then only ask for the action to get started,
- and multi_perform() etc would still be called until the add/remove is
- completed.
- 2.4 Split connect and authentication process
- The multi interface treats the authentication process as part of the connect
- phase. As such any failures during authentication won't trigger the relevant
- QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.
- 2.5 Edge-triggered sockets should work
- The multi_socket API should work with edge-triggered socket events. One of
- the internal actions that need to be improved for this to work perfectly is
- the 'maxloops' handling in transfer.c:readwrite_data().
- 3. Documentation
- 3.2 Provide cmake config-file
- A config-file package is a set of files provided by us to allow applications
- to write cmake scripts to find and use libcurl more easily. See
- https://github.com/curl/curl/issues/885
- 4. FTP
- 4.1 HOST
- HOST is a command for a client to tell the server which host name to use,
- letting FTP servers offer name-based virtual hosting:
- https://tools.ietf.org/html/rfc7151
- 4.2 Alter passive/active on failure and retry
- When trying to connect passively to a server which only supports active
- connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
- connection. There could be a way to fall back to an active connection (and
- vice versa). https://curl.haxx.se/bug/feature.cgi?id=1754793
- 4.3 Earlier bad letter detection
- Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in the
- process to avoid doing a resolve and connect in vain.
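- The check itself is simple; a sketch (invented helper) that could run right
- after URL decoding, before any resolve or connect:

```c
#include <assert.h>
#include <stddef.h>

/* Return nonzero if a decoded FTP URL part contains a raw CR or LF
   (the result of %0d/%0a in the URL); illustrative only. */
static int has_bad_letter(const char *part, size_t len)
{
  size_t i;
  for(i = 0; i < len; i++) {
    if(part[i] == '\r' || part[i] == '\n')
      return 1;
  }
  return 0;
}
```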
- 4.4 REST for large files
- REST fix for servers not behaving well on >2GB requests. This should fail if
- the server doesn't set the pointer to the requested index. The tricky
- (impossible?) part is to figure out if the server did the right thing or not.
- 4.5 ASCII support
- FTP ASCII transfers do not follow RFC959. They don't convert the data
- accordingly.
- 4.6 GSSAPI via Windows SSPI
- In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
- via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
- support for GSSAPI authentication via Windows SSPI.
- 4.7 STAT for LIST without data connection
- Some FTP servers allow STAT for listing directories instead of using LIST, and
- the response is then sent over the control connection instead of as the
- otherwise used data connection: http://www.nsftools.com/tips/RawFTP.htm#STAT
- This is not detailed in any FTP specification.
- 5. HTTP
- 5.1 Better persistency for HTTP 1.0
- "Better" support for persistent connections over HTTP 1.0
- https://curl.haxx.se/bug/feature.cgi?id=1089001
- 5.2 support FF3 sqlite cookie files
- Firefox 3 is changing from its former format to a sqlite database instead.
- We should consider how (lib)curl can/should support this.
- https://curl.haxx.se/bug/feature.cgi?id=1871388
- 5.3 Rearrange request header order
- Server implementors often make an effort to detect browsers and to reject
- clients they detect as not matching one. One of the last details we cannot yet
- control in libcurl's HTTP requests, which also can be exploited to detect
- that libcurl is in fact used even when it tries to impersonate a browser, is
- the order of the request headers. I propose that we introduce a new option in
- which you give headers a value, and then when the HTTP request is built it
- sorts the headers based on that number. We could then have internally created
- headers use a default value so only headers that need to be moved have to be
- specified.
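- The proposal could be sketched like this; the struct, the weights and the
- default value of 100 are invented for illustration:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Give each request header a weight and sort by it before the request
   is built; internally created headers would get a default weight. */
struct ordered_header {
  const char *line;
  int weight; /* lower sorts earlier; e.g. default 100 */
};

static int cmp_weight(const void *a, const void *b)
{
  return ((const struct ordered_header *)a)->weight -
         ((const struct ordered_header *)b)->weight;
}

static void sort_headers(struct ordered_header *h, size_t n)
{
  qsort(h, n, sizeof(*h), cmp_weight);
}
```

- Note that qsort() is not guaranteed stable, so headers with equal weights
- might need a tie-breaker to preserve insertion order.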
- 5.4 HTTP Digest using SHA-256
- RFC 7616 introduces an update to the HTTP Digest authentication
- specification, which amongst other things defines how new digest algorithms
- can be used instead of MD5 which is considered old and not recommended.
- See https://tools.ietf.org/html/rfc7616 and
- https://github.com/curl/curl/issues/1018
- 5.5 auth= in URLs
- Add the ability to specify the preferred authentication mechanism to use by
- using ;auth=<mech> in the login part of the URL.
- For example:
- http://test:pass;auth=NTLM@example.com would be equivalent to specifying --user
- test:pass;auth=NTLM or --user test:pass --ntlm from the command line.
- Additionally this should be implemented for proxy base URLs as well.
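- Extracting the mechanism from the login part could look roughly like this;
- the helper is invented and skips all the real URL-parsing details:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Pull the mechanism out of a ";auth=<mech>" suffix in the login part
   of a URL, e.g. "pass;auth=NTLM"; returns 1 if one was found. */
static int parse_auth(const char *login, char *mech, size_t mechlen)
{
  const char *p = strstr(login, ";auth=");
  if(!p)
    return 0;
  snprintf(mech, mechlen, "%s", p + 6);
  return 1;
}
```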
- 5.6 Refuse "downgrade" redirects
- See https://github.com/curl/curl/issues/226
- Consider a way to tell curl to refuse to "downgrade" protocol with a redirect
- and/or possibly a bit that refuses redirect to change protocol completely.
- 5.7 Brotli compression
- Brotli compression performs better than gzip and is being implemented by
- browsers and servers widely. The algorithm: https://github.com/google/brotli
- The Firefox bug: https://bugzilla.mozilla.org/show_bug.cgi?id=366559
- 5.8 QUIC
- The standardization process of QUIC has been taken to the IETF and can be
- followed on the [IETF QUIC Mailing
- list](https://www.ietf.org/mailman/listinfo/quic). I'd like us to get on the
- bandwagon. Ideally, this would be done with a separate library/project to
- handle the binary/framing layer in a similar fashion to how HTTP/2 is
- implemented. This, to allow other projects to benefit from the work and to
- thus broaden the interest and chance of others to participate.
- 5.9 Improve formpost API
- Revamp the formpost API and make something that is easier to use and
- understand:
- https://github.com/curl/curl/wiki/formpost-API-redesigned
- 5.10 Leave secure cookies alone
- Non-secure origins (HTTP sites) should not be allowed to set or modify
- cookies with the 'secure' property:
- https://tools.ietf.org/html/draft-ietf-httpbis-cookie-alone-01
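- The rule itself is a one-liner; a sketch (invented helper) of the check a
- cookie engine would apply before accepting a Set-Cookie:

```c
#include <assert.h>

/* Per the cookie-alone draft: a cookie carrying the 'secure' attribute
   may only be set or overwritten from a secure (HTTPS) origin. */
static int cookie_set_allowed(int cookie_is_secure, int origin_is_https)
{
  return !cookie_is_secure || origin_is_https;
}
```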
- 5.11 Chunked transfer multipart formpost
- For cases where the file is being created while the upload is in progress
- (such as data passed on stdin to the curl tool), we cannot know the size
- beforehand and we would rather not read the entire thing into memory before
- the upload can start.
- https://github.com/curl/curl/issues/1139
- 6. TELNET
- 6.1 ditch stdin
- Reading input (to send to the remote server) on stdin is a crappy solution for
- library purposes. We need to invent a good way for the application to be able
- to provide the data to send.
- 6.2 ditch telnet-specific select
- Make the telnet support's network select() loop go away and merge the code
- into the main transfer loop. Until this is done, the multi interface won't
- work for telnet.
- 6.3 feature negotiation debug data
- Add telnet feature negotiation data to the debug callback as header data.
- 7. SMTP
- 7.1 Pipelining
- Add support for pipelining emails.
- 7.2 Enhanced capability support
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the EHLO command.
- 7.3 Add CURLOPT_MAIL_CLIENT option
- Rather than use the URL to specify the mail client string to present in the
- HELO and EHLO commands, libcurl should support a new CURLOPT specifically for
- specifying this data as the URL is non-standard and to be honest a bit of a
- hack ;-)
- Please see the following thread for more information:
- https://curl.haxx.se/mail/lib-2012-05/0178.html
- 8. POP3
- 8.1 Pipelining
- Add support for pipelining commands.
- 8.2 Enhanced capability support
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the CAPA command.
- 9. IMAP
- 9.1 Enhanced capability support
- Add the ability, for an application that uses libcurl, to obtain the list of
- capabilities returned from the CAPABILITY command.
- 10. LDAP
- 10.1 SASL based authentication mechanisms
- Currently the LDAP module only supports ldap_simple_bind_s() in order to bind
- to an LDAP server. However, this function sends username and password details
- using the simple authentication mechanism (as clear text). It should instead
- be possible to use ldap_bind_s(), specifying the security context
- information ourselves.
- 11. SMB
- 11.1 File listing support
- Add support for listing the contents of an SMB share. The output should probably
- be the same as/similar to FTP.
- 11.2 Honor file timestamps
- The timestamp of the transferred file should reflect that of the original file.
- 11.3 Use NTLMv2
- Currently the SMB authentication uses NTLMv1.
- 11.4 Create remote directories
- Support for creating remote directories when uploading a file to a directory
- that doesn't exist on the server, just like --ftp-create-dirs.
- 12. New protocols
- 12.1 RSYNC
- There's no RFC for the protocol or a URI/URL format. An implementation
- should most probably use an existing rsync library, such as librsync.
- 13. SSL
- 13.1 Disable specific versions
- Provide an option that allows for disabling specific SSL versions, such as
- SSLv2 https://curl.haxx.se/bug/feature.cgi?id=1767276
- 13.2 Provide mutex locking API
- Provide a libcurl API for setting mutex callbacks in the underlying SSL
- library, so that the same application code can use mutex-locking
- independently of whether OpenSSL or GnuTLS is being used.
- 13.3 Evaluate SSL patches
- Evaluate/apply Gertjan van Wingerde's SSL patches:
- https://curl.haxx.se/mail/lib-2004-03/0087.html
- 13.4 Cache/share OpenSSL contexts
- "Look at SSL cafile - quick traces look to me like these are done on every
- request as well, when they should only be necessary once per SSL context (or
- once per handle)". The major improvement we can rather easily make is to
- ensure we don't create and kill a new SSL "context" for every request, but
- instead make one for every connection and reuse that SSL context in the same
- style connections are reused. It will make us use slightly more memory but
- it will let libcurl do fewer creations and deletions of SSL contexts.
- Technically, the "caching" is probably best implemented by getting added to
- the share interface so that easy handles that want to and can reuse the
- context specify that by sharing with the right properties set.
- https://github.com/curl/curl/issues/1110
- 13.5 Export session ids
- Add an interface to libcurl that enables "session IDs" to get
- exported/imported. Cris Bailiff said: "OpenSSL has functions which can
- serialise the current SSL state to a buffer of your choice, and recover/reset
- the state from such a buffer at a later date - this is used by mod_ssl for
- apache to implement an SSL session ID cache".
- 13.6 Provide callback for cert verification
- OpenSSL supports a callback for customised verification of the peer
- certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
- it be? There's so much that could be done if it were!
- 13.7 improve configure --with-ssl
- Make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
- then NSS...
- 13.8 Support DANE
- DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
- keys and certs over DNS using DNSSEC as an alternative to the CA model.
- https://www.rfc-editor.org/rfc/rfc6698.txt
- An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
- (https://curl.haxx.se/mail/lib-2013-03/0075.html) but it was too simple an
- approach. See Daniel's comments:
- https://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
- correct library to base this development on.
- Björn Stenberg wrote a separate initial take on DANE that was never
- completed.
- 13.10 Support SSLKEYLOGFILE
- When used, Firefox and Chrome dump their master TLS keys to the file name
- this environment variable specifies. This allows tools like for example
- Wireshark to capture and decipher TLS traffic to/from those clients. libcurl
- could be made to support this more widely (presumably this already works when
- built with NSS). Peter Wu made an OpenSSL preload library that makes this
- possible; it can be used as inspiration and guidance:
- https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.c
- 13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
- CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
- certificates when comparing the pinned keys. Therefore it is not compatible
- with "HTTP Public Key Pinning", where intermediate and root certificates can
- also be pinned. This is very useful as it prevents webadmins from "locking
- themselves out of their servers".
- Adding this feature would make curl's pinning 100% compatible with HPKP and
- allow more flexible pinning.
- 13.12 Support HSTS
- "HTTP Strict Transport Security" is a TOFU (trust on first use), time-based
- feature indicated by an HTTP header sent by the web server. It is widely used
- in browsers and its purpose is to prevent insecure HTTP connections after
- a previous HTTPS connection. It protects against SSL stripping attacks.
- Doc: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
- RFC 6797: https://tools.ietf.org/html/rfc6797
- 13.13 Support HPKP
- "HTTP Public Key Pinning" is a TOFU (trust on first use), time-based
- feature indicated by an HTTP header sent by the web server. Its purpose is
- to prevent man-in-the-middle attacks by trusted CAs by allowing webadmins
- to specify which CAs/certificates/public keys to trust when connecting to
- their websites.
- It can be built based on PINNEDPUBLICKEY.
- Wikipedia: https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
- OWASP: https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
- Doc: https://developer.mozilla.org/de/docs/Web/Security/Public_Key_Pinning
- RFC: https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21
- 14. GnuTLS
- 14.1 SSL engine stuff
- Is this even possible?
- 14.2 check connection
- Add a way to check if the connection seems to be alive, to correspond to the
- SSL_peek() way we use with OpenSSL.
- 15. WinSSL/SChannel
- 15.1 Add support for client certificate authentication
- WinSSL/SChannel currently makes use of the OS-level system and user
- certificate and private key stores. This does not allow the application
- or the user to supply a custom client certificate using curl or libcurl.
- Therefore support for the existing -E/--cert and --key options should be
- implemented by supplying a custom certificate to the SChannel APIs, see:
- - Getting a Certificate for Schannel
- https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx
- 15.2 Add support for custom server certificate validation
- WinSSL/SChannel currently makes use of the OS-level system and user
- certificate trust store. This does not allow the application or user to
- customize the server certificate validation process using curl or libcurl.
- Therefore support for the existing --cacert or --capath options should be
- implemented by supplying a custom certificate to the SChannel APIs, see:
- - Getting a Certificate for Schannel
- https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx
- 15.3 Add support for the --ciphers option
- The cipher suites used by WinSSL/SChannel are configured on an OS-level
- instead of an application-level. This does not allow the application or
- the user to customize the configured cipher suites using curl or libcurl.
- Therefore support for the existing --ciphers option should be implemented
- by mapping the OpenSSL/GnuTLS cipher suites to the SChannel APIs, see
- - Specifying Schannel Ciphers and Cipher Strengths
- https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx
- 16. SASL
- 16.1 Other authentication mechanisms
- Add support for other authentication mechanisms such as OTP (one-time
- passwords), GSS-SPNEGO and others.
- 16.2 Add QOP support to GSSAPI authentication
- Currently the GSSAPI authentication only supports the default QOP of auth
- (Authentication), whilst Kerberos V5 supports both auth-int (Authentication
- with integrity protection) and auth-conf (Authentication with integrity and
- privacy protection).
- 16.3 Support binary messages (i.e.: non-base64)
- Mandatory to support LDAP SASL authentication.
- 17. SSH protocols
- 17.1 Multiplexing
- SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
- multiple parallel transfers from the same host using the same connection,
- much in the same spirit as HTTP/2 does. libcurl however does not take
- advantage of that ability but will instead always create a new connection for
- new transfers even if an existing connection already exists to the host.
- To fix this, libcurl would have to detect an existing connection and "attach"
- the new transfer to the existing one.
- 17.2 SFTP performance
- libcurl's SFTP transfer performance is sub par and can be improved, mostly by
- the approach mentioned in "1.6 Modified buffer size approach".
- 17.3 Support better than MD5 hostkey hash
- libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
- server's key. MD5 is generally being deprecated so we should implement
- support for stronger hashing algorithms. libssh2 itself is what provides this
- underlying functionality and it supports at least SHA-1 as an alternative.
- SHA-1 is also being deprecated these days so we should consider working with
- libssh2 to instead offer support for SHA-256 or similar.
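- Whatever digest ends up being offered, the comparison side stays the same; a
- minimal sketch of a length-agnostic, constant-time digest comparison
- (hypothetical helper, not a libssh2 or libcurl API):

```c
#include <stddef.h>

/* Hypothetical helper: compare a locally computed hostkey digest against
 * the user-pinned one in constant time, independent of digest length, so
 * it works for MD5 (16 bytes), SHA-1 (20) or SHA-256 (32) alike. */
int hostkey_hash_equal(const unsigned char *a, const unsigned char *b,
                       size_t len)
{
  unsigned char diff = 0;
  size_t i;
  for(i = 0; i < len; i++)
    diff |= (unsigned char)(a[i] ^ b[i]); /* accumulate every bit difference */
  return diff == 0;
}
```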
- 17.4 Support CURLOPT_PREQUOTE
- The two other QUOTE options are supported for SFTP, but this was left out for
- unknown reasons!
- 18. Command line tool
- 18.1 sync
- "curl --sync http://example.com/feed[1-100].rss" or
- "curl --sync http://example.net/{index,calendar,history}.html"
- Downloads a range or set of URLs using the remote name, but only if the
- remote file is newer than the local file. A Last-Modified HTTP date header
- should also be used to set the mod date on the downloaded file.
- 18.2 glob posts
- Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
- This is easily scripted though.
- 18.3 prevent file overwriting
- Add an option that prevents curl from overwriting existing local files. When
- used, and there already is an existing file with the target file name
- (either -O or -o), a number should be appended (and increased if already
- existing). So that index.html becomes first index.html.1 and then
- index.html.2 etc.
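- A minimal sketch of the proposed naming logic (a hypothetical helper, using
- POSIX stat() to probe for existing files):

```c
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Hypothetical sketch of the proposed behavior: given a target file name,
 * produce the first non-existing variant (name, name.1, name.2, ...).
 * Writes the result into 'out' and returns 0 on success. */
int next_free_name(const char *name, char *out, size_t outlen)
{
  struct stat st;
  int i;
  snprintf(out, outlen, "%s", name);
  for(i = 1; stat(out, &st) == 0; i++) { /* keep going while it exists */
    if(snprintf(out, outlen, "%s.%d", name, i) >= (int)outlen)
      return -1; /* suffixed name would not fit */
  }
  return 0;
}
```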
- 18.4 simultaneous parallel transfers
- The client could be told to use at most N simultaneous parallel transfers and
- then just make sure that happens. It should of course not make more than one
- connection to the same remote host. This would require the client to use the
- multi interface. https://curl.haxx.se/bug/feature.cgi?id=1558595
- Using the multi interface would also allow properly using parallel transfers
- with HTTP/2 and supporting HTTP/2 server push from the command line.
- 18.5 provide formpost headers
- Extending the capabilities of the multipart formposting. How about leaving
- the ';type=foo' syntax as it is and adding an extra tag (headers) which
- works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
- fil1.hdr contains extra headers like
- Content-Type: text/plain; charset=KOI8-R
- Content-Transfer-Encoding: base64
- X-User-Comment: Please don't use browser specific HTML code
- which should override the program's reasonable defaults (text/plain,
- 8bit...)
- 18.6 warning when setting an option
- Display a warning when libcurl returns an error when setting an option.
- This can be useful to tell when support for a particular feature hasn't been
- compiled into the library.
- 18.8 offer color-coded HTTP header output
- By offering different color output on the header name and the header
- contents, they could be made more readable and thus help users working on
- HTTP services.
- 18.9 Choose the name of file in braces for complex URLs
- When using braces to download a list of URLs with complicated names among
- the alternatives, it could be handy to allow curl to use other names when
- saving.
- Consider a way to offer that. Possibly like
- {partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
- colon is the output name.
- See https://github.com/curl/curl/issues/221
- 18.10 improve how curl works in a windows console window
- If you pull the scrollbar when transferring with curl in a Windows console
- window, the transfer is interrupted and can get disconnected. This can
- probably be improved. See https://github.com/curl/curl/issues/322
- 18.11 -w output to stderr
- -w is quite useful, but not to those of us who use curl without -o or -O
- (such as for scripting through a higher level language). It would be nice to
- have an option that is exactly like -w but sends it to stderr
- instead. Proposed name: --write-stderr. See
- https://github.com/curl/curl/issues/613
- 18.12 keep running, read instructions from pipe/socket
- Provide an option that makes curl not exit after the last URL (or even work
- without a given URL), and instead read further instructions from a pipe or
- a socket, so that a subsequent curl invocation can talk to the still-running
- instance and ask for transfers to get done, letting it maintain its
- connection pool, DNS cache and more.
- 18.13 support metalink in http headers
- Curl has support for downloading a metalink xml file, processing it, and then
- downloading the target of the metalink. This is done via the --metalink option.
- It would be nice if metalink also supported downloading via metalink
- information that is stored in HTTP headers (RFC 6249). Theoretically this could
- also be supported with the --metalink option.
- See https://tools.ietf.org/html/rfc6249
- See also https://lists.gnu.org/archive/html/bug-wget/2015-06/msg00034.html for
- an implementation of this in wget.
- 18.14 --fail without --location should treat 3xx as a failure
- To allow a command line like this to detect a redirect and consider it a
- failure:
- curl -v --fail -O https://example.com/curl-7.48.0.tar.gz
- ... --fail must treat 3xx responses as failures too. The least problematic
- way to implement this is probably to add that new logic in the command line
- tool only and not in the underlying CURLOPT_FAILONERROR logic.
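- The decision the tool would make can be sketched as a tiny predicate
- (hypothetical code, mirroring what --fail does today plus the proposed 3xx
- rule):

```c
/* Hypothetical sketch of the proposed tool-side check: with --fail set
 * but --location unset, any 3xx response would count as a failure, in
 * addition to the 4xx/5xx responses --fail already rejects. */
int fail_on_status(long response_code, int location_enabled)
{
  if(response_code >= 400)
    return 1;                 /* what --fail does today */
  if(!location_enabled && response_code >= 300)
    return 1;                 /* proposed: unfollowed redirects fail too */
  return 0;
}
```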
- 18.15 --retry should resume
- When --retry is used and curl actually retries a transfer, it should use the
- already transferred data and do a resumed transfer for the rest (when
- possible) so that it doesn't have to transfer the same data again that was
- already transferred before the retry.
- See https://github.com/curl/curl/issues/1084
- 18.16 send only part of --data
- When the user only wants to send a small piece of the data provided with
- --data or --data-binary, like when that data is a huge file, consider a way
- to specify that curl should only send a piece of that. One suggested syntax
- would be: "--data-binary @largefile.zip!1073741823-2147483647".
- See https://github.com/curl/curl/issues/1200
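- A sketch of parsing that suggested syntax (the "!start-end" notation is only
- the proposal from the issue, not an implemented curl feature):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical parser for the suggested "file!start-end" syntax: split
 * off the byte range after '!' so only that slice of the file would be
 * sent. Returns 0 on success, -1 on malformed input. */
int parse_data_range(const char *spec, char *file, size_t filelen,
                     long long *start, long long *end)
{
  const char *bang = strchr(spec, '!');
  if(!bang || sscanf(bang + 1, "%lld-%lld", start, end) != 2)
    return -1;
  if((size_t)(bang - spec) >= filelen)
    return -1; /* file name part would not fit */
  memcpy(file, spec, bang - spec);
  file[bang - spec] = '\0';
  return 0;
}
```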
- 18.17 consider file name from the redirected URL with -O ?
- When a user gives a URL and uses -O, and curl follows a redirect to a new
- URL, the file name is not extracted and used from the newly redirected-to URL
- even if the new URL may have a much more sensible file name.
- This is clearly documented and helps security, since users are never
- surprised by which file name might get overwritten. But maybe a new option
- could allow for this, or maybe -J should imply such a treatment, as -J
- already lets the server decide what file name to use and thus already
- carries the "may overwrite any file" risk.
- This is extra tricky if the original URL has no file name part at all since
- then the current code path will error out with an error message, and we can't
- *know* already at that point if curl will be redirected to a URL that has a
- file name...
- See https://github.com/curl/curl/issues/1241
- 19. Build
- 19.1 roffit
- Consider extending 'roffit' to produce decent ASCII output, and use that
- instead of (g)nroff when building src/tool_hugehelp.c
- 19.2 Enable PIE and RELRO by default
- Especially when having programs that execute curl via the command line, PIE
- renders the exploitation of memory corruption vulnerabilities a lot more
- difficult. This can be attributed to the additional information leaks being
- required to conduct a successful attack. RELRO, on the other hand, marks
- different binary sections like the GOT as read-only and thus kills a handful
- of techniques that come in handy when attackers are able to arbitrarily
- overwrite memory. A few tests showed that enabling these features had close
- to no impact, neither on the performance nor on the general functionality of
- curl.
- 20. Test suite
- 20.1 SSL tunnel
- Make our own version of stunnel for simple port forwarding to enable HTTPS
- and FTP-SSL tests without the stunnel dependency, and it could allow us to
- provide test tools built with either OpenSSL or GnuTLS
- 20.2 nicer lacking perl message
- If perl wasn't found by the configure script, don't attempt to run the tests
- but explain nicely why they cannot run.
- 20.3 more protocols supported
- Extend the test suite to include more protocols. The telnet tests could just
- do FTP or HTTP operations (for which we have test servers).
- 20.4 more platforms supported
- Make the test suite work on more platforms, such as OpenBSD and Mac OS.
- Removing the fork() calls should make it even more portable.
- 20.5 Add support for concurrent connections
- Tests 836, 882 and 938 were designed to verify that a connection is not
- re-used when different login credentials are in play, for protocols that
- must not re-use a connection under such circumstances.
- Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
- connections. The read while() loop seems to loop until it receives a disconnect
- from the client, where it then enters the waiting for connections loop. When
- the client opens a second connection to the server, the first connection hasn't
- been dropped (unless it has been forced - which we shouldn't do in these tests)
- and thus the wait for connections loop is never entered to receive the second
- connection.
- 20.6 Use the RFC6265 test suite
- A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
- https://github.com/abarth/http-state/tree/master/tests
- It'd be really awesome if someone would write a script/setup that would run
- curl with that test suite and detect deviances. Ideally, that would even be
- incorporated into our regular test suite.
- 21. Next SONAME bump
- 21.1 http-style HEAD output for FTP
- #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
- from being output in NOBODY requests over FTP
- 21.2 combine error codes
- Combine some of the error codes to remove duplicates. The original
- numbering should not be changed, and the old identifiers would be
- macroed to the new ones in a CURL_NO_OLDIES section to help with
- backward compatibility.
- Candidates for removal and their replacements:
- CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND
- CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND
- CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR
- CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT
- CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT
- CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL
- CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND
- CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
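- The aliasing mechanism could look like the following sketch (curl.h already
- uses a CURL_NO_OLDIES guard for deprecated names in a similar way; the
- numeric values here are illustrative only):

```c
/* Illustrative values only; the real numbering must stay unchanged. */
#define CURLE_REMOTE_FILE_NOT_FOUND 78
#define CURLE_RANGE_ERROR 33

/* old identifiers kept as aliases unless the app opts out */
#ifndef CURL_NO_OLDIES
#define CURLE_FILE_COULDNT_READ_FILE CURLE_REMOTE_FILE_NOT_FOUND
#define CURLE_FTP_COULDNT_RETR_FILE  CURLE_REMOTE_FILE_NOT_FOUND
#define CURLE_FTP_COULDNT_USE_REST   CURLE_RANGE_ERROR
#endif
```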
- 21.3 extend CURLOPT_SOCKOPTFUNCTION prototype
- The current prototype only provides 'purpose' that tells what the
- connection/socket is for, but not any protocol or similar. It makes it hard
- for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
- similar.
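- One possible shape of an extended callback; everything beyond 'purpose' is
- invented here for illustration and is not an actual libcurl prototype:

```c
/* Hypothetical extended sockopt callback: in addition to 'purpose', it
 * would receive a transport and a protocol identifier so an application
 * could tell a TCP/HTTP socket from a UDP/TFTP one. */
typedef enum { XFER_TCP, XFER_UDP } xfer_transport;

typedef int (*extended_sockopt_callback)(void *clientp,
                                         int sockfd,
                                         int purpose,              /* as today */
                                         xfer_transport transport, /* new */
                                         const char *protocol);    /* new */

/* An application callback could then branch on the extra detail: */
int my_sockopt_cb(void *clientp, int sockfd, int purpose,
                  xfer_transport transport, const char *protocol)
{
  (void)clientp; (void)sockfd; (void)purpose; (void)protocol;
  return transport == XFER_UDP; /* e.g. only tweak UDP sockets */
}
```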
- 22. Next major release
- 22.1 cleanup return codes
- curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
- CURLMcode. These should be changed to be the same.
- 22.2 remove obsolete defines
- remove obsolete defines from curl/curl.h
- 22.3 size_t
- make several functions use size_t instead of int in their APIs
- 22.4 remove several functions
- remove the following functions from the public API:
- curl_getenv
- curl_mprintf (and variations)
- curl_strequal
- curl_strnequal
- They will instead become curlx_ alternatives, so the curl tool can still
- use them by building them from source.
- These functions have no purpose anymore:
- curl_multi_socket
- curl_multi_socket_all
- 22.5 remove CURLOPT_FAILONERROR
- Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
- internally. Let the app judge success or not for itself.
- 22.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
- Remove support for a global DNS cache. Anything global is silly, and we
- already offer the share interface for the same functionality but done
- "right".
- 22.7 remove progress meter from libcurl
- The internally provided progress meter output doesn't belong in the library.
- Basically no application wants it (apart from curl) but instead applications
- can and should do their own progress meters using the progress callback.
- The progress callback should then be bumped as well to get proper 64bit
- variable types passed to it instead of doubles so that big files work
- correctly.
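- A sketch of such a callback shape with 64-bit counters (using plain long
- long here instead of curl_off_t to stay self-contained):

```c
/* Hypothetical 64-bit progress callback type plus a helper showing why
 * integer counters beat doubles: byte counts stay exact for huge files. */
typedef int (*xferinfo_cb)(void *clientp,
                           long long dltotal, long long dlnow,
                           long long ultotal, long long ulnow);

/* helper: whole percent completed, 0 when the total is unknown */
int percent_done(long long dlnow, long long dltotal)
{
  return dltotal > 0 ? (int)((dlnow * 100) / dltotal) : 0;
}
```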
- 22.8 remove 'curl_httppost' from public
- curl_formadd() was made to fill in a public struct, but the fact that the
- struct is public is never really used by applications for their own
- advantage, but instead often restricts how the form functions can be
- modified.
- Changing them to return a private handle will benefit the implementation and
- allow us much greater freedoms while still maintaining a solid API and ABI.
|