{"id":983,"hash":"91f40614d1a1fc8dad2ffab996af265b8f2687d90a16fd0a2405fc8a9b984f1e","pattern":"Python asyncio/aiohttp: ValueError: too many file descriptors in select() on Windows","full_message":"Note: future readers be aware, this question is old and was written and coded in a rush. The answer given may be useful, but the question and code probably are not.\n\nHello everyone,\n\nI'm having trouble understanding asyncio and aiohttp and making both work together. Because I don't understand what I'm doing, I've run into a problem that I have no idea how to solve.\n\nI'm using 64-bit Windows 10.\n\nThe following code returns a list of pages that do not contain \"html\" in the Content-Type header. It's implemented using asyncio.\n\nimport asyncio\nimport aiohttp\n\nMAXitems = 30\n\nasync def getHeaders(url, session, sema):\n    async with session:\n        async with sema:\n            try:\n                async with session.head(url) as response:\n                    try:\n                        if \"html\" in response.headers[\"Content-Type\"]:\n                            return url, True\n                        else:\n                            return url, False\n                    except:\n                        return url, False\n            except:\n                return url, False\n\ndef check_urls_without_html(list_of_urls):\n    headers_without_html = set()\n    while(len(list_of_urls) != 0):\n        blockurls = []\n        print(len(list_of_urls))\n        items = 0\n        for num in range(0, len(list_of_urls)):\n            if num < MAXitems:\n                blockurls.append(list_of_urls[num - items])\n                list_of_urls.remove(list_of_urls[num - items])\n                items += 1\n        loop = asyncio.get_event_loop()\n        semaphoreHeaders = asyncio.Semaphore(50)\n        session = aiohttp.ClientSession()\n        data = loop.run_until_complete(asyncio.gather(*(getHeaders(url, session, semaphoreHeaders) for url in blockurls)))\n        for header in data:\n            if not header[1]:\n                headers_without_html.add(header)\n    return headers_without_html\n\nlist_of_urls = ['http://www.google.com', 'http://www.reddit.com']\nheaders_without_html = check_urls_without_html(list_of_urls)\n\nfor header in headers_without_html:\n    print(header[0])\n\nWhen I run it with too many URLs (e.g. 2000), it sometimes returns an error like this one:\n\ndata = loop.run_until_complete(asyncio.gather(*(getHeaders(url, session, semaphoreHeaders) for url in blockurls)))\n  File \"USER\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\asyncio\\base_events.py\", line 454, in run_until_complete\n    self.run_forever()\n  File \"USER\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\asyncio\\base_events.py\", line 421, in run_forever\n    self._run_once()\n  File \"USER\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\asyncio\\base_events.py\", line 1390, in _run_once\n    event_list = self._selector.select(timeout)\n  File \"USER\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\selectors.py\", line 323, in select\n    r, w, _ = self._select(self._readers, self._writers, [], timeout)\n  File \"USER\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\selectors.py\", line 314, in _select\n    r, w, x = select.select(r, w, w, timeout)\nValueError: too many file descriptors in select()\n\nI've read that this problem arises from a Windows restriction. I've also read there is not much that can be done about it, other than trying to use fewer file descriptors.\n\nI've seen people push thousands of requests with asyncio and aiohttp, but even with my chunking I can't push 30-50 without getting this error.\n\nIs there something fundamentally wrong with my code, or is it an inherent problem with Windows? Can it be fixed? 
Can one increase the limit on the maximum number of allowed file descriptors in select()?","ecosystem":"pypi","package_name":"python-3.x","package_version":null,"solution":"By default, Windows can use only 64 sockets in the asyncio event loop. This is a limitation of the underlying select() API call.\n\nTo raise that limit, use ProactorEventLoop, as in the code below (with the imports it needs). See the full docs here.\n\nimport sys\nimport asyncio\n\nif sys.platform == 'win32':\n    loop = asyncio.ProactorEventLoop()\n    asyncio.set_event_loop(loop)\n\nAnother solution is to limit the overall concurrency using a semaphore; see the answer provided here. For example, when making 2000 API calls you might not want too many parallel open requests (they might time out, and it is more difficult to see the individual call times). This will give you\n\nawait gather_with_concurrency(100, *my_coroutines)","confidence":0.95,"source":"stackoverflow","source_url":"https://stackoverflow.com/questions/47675410/python-asyncio-aiohttp-valueerror-too-many-file-descriptors-in-select-on-win","votes":13,"created_at":"2026-04-19T04:52:07.355879+00:00","updated_at":"2026-04-19T04:52:07.355879+00:00"}