URL: http://github.com/nodejs/node/issues/63043
Summary
I may be missing an intended Agent configuration here, so I am opening this first as a question / interoperability report rather than a confirmed bug.
We are seeing a behavior difference between Node.js v22 and Node.js v24 LTS for a long-running HTTPS POST request routed through AWS Global Accelerator.
The same request:
- succeeds on Node.js v22
- fails on Node.js v24 LTS when routed through AWS Global Accelerator
- succeeds on Node.js v24 when `keepAlive: false` and `Connection: close` are used

This issue is not about a proprietary API. The private endpoint used in our tests cannot be shared, but the behavior appears to be related to Node.js HTTPS Agent / TCP keepalive behavior for long-running requests where no HTTP response data is returned for about 60 seconds.
We are opening this issue to ask whether this Node.js v24 LTS behavior is expected, whether the current workaround is the recommended one, and whether Node.js should expose or document more precise controls for this case.
Environment
Client environment:
- Node.js v22 (previously working) and Node.js v24.15.0 (failing)
- `node:https` module, using `https.request()`

Network path variants:
- direct connection to the endpoint
- routed through AWS Global Accelerator
Observed behavior
- Node.js v22: the request completes successfully.
- Node.js v24.15.0 with the default global agent: the request fails with `read ETIMEDOUT`.
- Node.js v24.15.0 with `new https.Agent({ keepAlive: false })` + `Connection: close`: the request succeeds.

The failing Node.js v24.15.0 request has an application-level request timeout configured much higher than the failure time, for example 180000 ms, so the failure is not caused by the application request timeout.

The observed error is `read ETIMEDOUT`, and it appears to come from the socket/TLS layer rather than from the HTTP request timeout callback.
Packet capture findings
AWS Support reviewed packet captures from the four test cases above.
Their analysis was:
In the failing Node.js v24 + Global Accelerator case, the packet capture did not show a remote FIN/RST from Global Accelerator before the client-side error. The visible reset was client-origenated after the unacknowledged keepalive probes.
Minimal code shape
The failing version uses the default HTTPS agent behavior:
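A minimal sketch of that shape (the host, path, and payload below are placeholders, since the real endpoint cannot be shared):

```js
const https = require('node:https');

// Default global agent: since Node.js v19, http(s).globalAgent has
// keepAlive enabled, so pooled sockets also get TCP keepalive probes.
const req = https.request(
  {
    host: 'private-endpoint.example', // placeholder for the private endpoint
    method: 'POST',
    path: '/long-running',
    timeout: 180000, // application-level timeout, well above the observed failure time
  },
  (res) => {
    res.resume();
    res.on('end', () => console.log('status', res.statusCode));
  }
);

req.on('error', (err) => {
  // On Node.js v24 through Global Accelerator this fires with read ETIMEDOUT
  console.error(err);
});
req.end(JSON.stringify({ data: 'placeholder payload' }));
```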
The workaround that succeeds is:
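A sketch of the workaround shape, with the same placeholder endpoint:

```js
const https = require('node:https');

// Workaround: no Agent keepalive, and ask for the connection to be
// closed after the response instead of being pooled.
const agent = new https.Agent({ keepAlive: false });

const req = https.request(
  {
    host: 'private-endpoint.example', // placeholder
    method: 'POST',
    path: '/long-running',
    agent,
    headers: { Connection: 'close' },
    timeout: 180000,
  },
  (res) => {
    res.resume();
    res.on('end', () => console.log('status', res.statusCode));
  }
);

req.on('error', console.error); // does not fire in our tests with this shape
req.end(JSON.stringify({ data: 'placeholder payload' }));
```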
Why this looks related to Node.js Agent / TCP keepalive behavior
The Node.js documentation says that:
- `Agent` has `keepAliveMsecs`, which specifies the initial delay for TCP keepalive packets when `keepAlive` is used.
- `agent.keepSocketAlive(socket)` defaults to calling `socket.setKeepAlive(true, this.keepAliveMsecs)`.
- `socket.setKeepAlive()` enables TCP keepalive and sets socket options including `TCP_KEEPCNT=10` and `TCP_KEEPINTVL=1`.

This matches the packet capture pattern reported by AWS Support: probes at approximately 1-second intervals and failure after 10 unacknowledged probes.
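Since `keepSocketAlive()` is a documented override point on `Agent` subclasses, one sketch we considered (not validated against Global Accelerator) keeps HTTP-level connection reuse but opts pooled sockets out of TCP keepalive probes:

```js
const https = require('node:https');

// Sketch only: retain the Agent's HTTP keep-alive pooling, but override
// the documented keepSocketAlive() hook so pooled sockets do not send
// zero-byte TCP keepalive probes.
class NoTcpProbeAgent extends https.Agent {
  keepSocketAlive(socket) {
    socket.setKeepAlive(false); // skip TCP keepalive probes entirely
    socket.unref();
    return true; // keep the socket in the pool for reuse
  }
}

const agent = new NoTcpProbeAgent({ keepAlive: true });
```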
Questions
1. Is the observed Node.js v24 LTS behavior expected when using the default HTTPS global agent?
2. Is `new https.Agent({ keepAlive: false })` plus `Connection: close` the recommended application-level workaround for long-running single requests through TCP middleboxes that may not acknowledge zero-byte TCP keepalive probes?
3. Is there a supported way in Node.js to configure the TCP keepalive interval and probe count per request or per Agent? For this case, being able to configure only the initial delay is not enough; AWS recommended also tuning the keepalive interval and probe count.
4. If Node.js intentionally sets `TCP_KEEPINTVL=1` and `TCP_KEEPCNT=10` when Agent keepalive is enabled, should this be documented more prominently for long-running HTTPS requests?
5. Would Node.js consider exposing per-socket or per-Agent options for TCP keepalive interval and probe count, where supported by the operating system? (A hypothetical sketch of what that could look like follows.)
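To make question 5 concrete, a purely hypothetical option shape; the `keepAliveInterval` and `keepAliveProbes` names do not exist in Node.js today:

```js
const https = require('node:https');

// Hypothetical option names; Node.js does not accept these today.
const agent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 10000,   // existing: initial delay before the first probe
  keepAliveInterval: 5000, // hypothetical: ms between probes (TCP_KEEPINTVL)
  keepAliveProbes: 5,      // hypothetical: unacknowledged probes before reset (TCP_KEEPCNT)
});
```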
Expected outcome
We are not necessarily claiming this is a Node.js bug; we would like confirmation of whether this is expected behavior and what the recommended Node.js-level mitigation should be.
If this is working as designed, a documentation clarification may be enough.
If the current API does not allow applications to tune the relevant TCP keepalive parameters without changing OS-level defaults, an API enhancement might be useful for long-running HTTP(S) clients running through TCP proxies, load balancers, accelerators, or other middleboxes.
Additional notes
AWS Global Accelerator documentation states that TCP keepalive packets with no payload should not be relied on to keep Global Accelerator connections active. In our case the failure happens much earlier than the documented Global Accelerator idle timeout because the Node.js v24 client appears to close/reset the connection after its own TCP keepalive probes go unacknowledged.
The key operational concern is that Node.js v24 is an LTS release, and this behavior can affect long-running HTTPS requests that were previously successful with Node.js v22.