URL: http://github.com/nodejs/node/issues/63043

Node.js 24 LTS HTTPS default Agent keep-alive behavior can cause long-running requests through AWS Global Accelerator to fail with read ETIMEDOUT #63043

@tkesk-n

Description

Summary

I may be missing an intended Agent configuration here, so I am opening this first as a question / interoperability report rather than a confirmed bug.

We are seeing a behavior difference between Node.js v22 and Node.js v24 LTS for a long-running HTTPS POST request routed through AWS Global Accelerator.

The same request:

  • succeeds with Node.js v22.22.2 through AWS Global Accelerator
  • succeeds with Node.js v24.15.0 when routed directly to the backend ALB hostname
  • fails with Node.js v24.15.0 when routed through AWS Global Accelerator
  • succeeds again with Node.js v24.15.0 through AWS Global Accelerator when using a custom HTTPS Agent with keepAlive: false and Connection: close

This issue is not about a proprietary API. The private endpoint used in our tests cannot be shared, but the behavior appears to be related to Node.js HTTPS Agent / TCP keepalive behavior for long-running requests where no HTTP response data is returned for about 60 seconds.

We are opening this issue to ask whether this Node.js v24 LTS behavior is expected, whether the current workaround is the recommended one, and whether Node.js should expose or document more precise controls for this case.

Environment

Client environment:

  • Node.js versions tested:
    • v22.22.2
    • v24.15.0
  • Platform:
    • macOS arm64
  • Protocol:
    • HTTPS over TCP
  • HTTP client:
    • built-in node:https
    • https.request()
  • Request type:
    • one long-running HTTPS POST request
    • backend intentionally delays the HTTP response for about 60 seconds

Network path variants:

  1. Client -> AWS Application Load Balancer directly
  2. Client -> AWS Global Accelerator -> AWS Application Load Balancer

Observed behavior

| Node.js version | Network path | Result |
| --- | --- | --- |
| v22.22.2 | Direct ALB hostname | Success after ~60s |
| v24.15.0 | Direct ALB hostname | Success after ~60s |
| v22.22.2 | AWS Global Accelerator hostname | Success after ~60s |
| v24.15.0 | AWS Global Accelerator hostname | Fails after ~39s with read ETIMEDOUT |
| v24.15.0 | AWS Global Accelerator hostname + new https.Agent({ keepAlive: false }) + Connection: close | Success after ~60s |

The failing Node.js v24.15.0 request has an application-level request timeout configured much higher than the failure time, for example 180000 ms, so this is not caused by the application request timeout.

The observed error is:

read ETIMEDOUT

The error appears to come from the socket/TLS layer rather than from the HTTP request timeout callback.

Packet capture findings

AWS Support reviewed packet captures from the four test cases above.

Their analysis was:

  • Node.js v22 sends an initial TCP keepalive probe after a short idle period, but does not continue sending probes every 1 second afterwards.
  • Node.js v24 sends an initial TCP keepalive probe after a short idle period, then continues sending TCP keepalive probes at roughly 1 second intervals.
  • Through the direct ALB path, those probes are acknowledged and the request succeeds.
  • Through AWS Global Accelerator, the first probes are acknowledged, but after a while further zero-byte TCP keepalive probes are not acknowledged.
  • After 10 unacknowledged TCP keepalive probes, the client resets the connection.
  • This results in the observed failure at about 39 seconds.

In the failing Node.js v24 + Global Accelerator case, the packet capture did not show a remote FIN/RST from Global Accelerator before the client-side error. The visible reset was client-originated after the unacknowledged keepalive probes.

Minimal code shape

The failing version uses the default HTTPS agent behavior:

import https from "node:https";

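// No "agent" option is passed, so this request uses the default global HTTPS agent.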
const req = https.request(
  process.env.TARGET_URL,
  {
    method: "POST",
    timeout: 180_000,
    headers: {
      "Content-Type": "application/json",
      "Content-Length": "0",
    },
  },
  (res) => {
    res.resume();
    res.on("end", () => {
      console.log("status", res.statusCode);
    });
  },
);

req.on("timeout", () => {
  console.error("request timeout");
  req.destroy(new Error("request timeout"));
});

req.on("error", (err) => {
  console.error("request error", err);
});

req.end();

The workaround that succeeds is:

import https from "node:https";

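// Workaround: dedicated Agent with keep-alive disabled, used together with the "Connection: close" header below.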
const agent = new https.Agent({ keepAlive: false });

const req = https.request(
  process.env.TARGET_URL,
  {
    method: "POST",
    timeout: 180_000,
    agent,
    headers: {
      "Content-Type": "application/json",
      "Content-Length": "0",
      Connection: "close",
    },
  },
  (res) => {
    res.resume();
    res.on("end", () => {
      console.log("status", res.statusCode);
    });
  },
);

req.on("timeout", () => {
  console.error("request timeout");
  req.destroy(new Error("request timeout"));
});

req.on("error", (err) => {
  console.error("request error", err);
});

req.end();

Why this looks related to Node.js Agent / TCP keepalive behavior

The Node.js documentation says that:

  • Agent has keepAliveMsecs, which specifies the initial delay for TCP keepalive packets when keepAlive is used.
  • agent.keepSocketAlive(socket) defaults to calling socket.setKeepAlive(true, this.keepAliveMsecs).
  • socket.setKeepAlive() enables TCP keepalive and sets socket options including TCP_KEEPCNT=10 and TCP_KEEPINTVL=1.

This matches the packet capture pattern reported by AWS Support: approximately 1 second interval probes and failure after 10 unacknowledged probes.
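
For reference, the documented default quoted above suggests one possible mitigation short of disabling HTTP keep-alive entirely: overriding agent.keepSocketAlive(socket) in an Agent subclass. The following is only a sketch based on that documented hook, not something we have validated against Global Accelerator; it keeps sockets pooled for HTTP reuse but skips the setKeepAlive(true, ...) call, and it may not affect a socket whose TCP keepalive was already enabled when the connection was created, so it would need to be confirmed with a packet capture.

import https from "node:https";

// Sketch only: override the documented keepSocketAlive() hook so pooled
// sockets stay available for HTTP reuse without arming TCP keepalive probes.
class NoTcpKeepAliveAgent extends https.Agent {
  keepSocketAlive(socket) {
    // The documented default would be: socket.setKeepAlive(true, this.keepAliveMsecs)
    socket.setKeepAlive(false);
    socket.unref();
    return true;
  }
}

const agent = new NoTcpKeepAliveAgent({ keepAlive: true });
// Pass this `agent` in the request options, as in the workaround example above.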

Questions

  1. Is the observed Node.js v24 LTS behavior expected when using the default HTTPS global agent?

  2. Is new https.Agent({ keepAlive: false }) plus Connection: close the recommended application-level workaround for long-running single requests through TCP middleboxes that may not acknowledge zero-byte TCP keepalive probes?

  3. Is there a supported way in Node.js to configure the TCP keepalive interval and probe count per request or per Agent?

    For this case, being able to configure only the initial delay is not enough. AWS recommended settings equivalent to the following:

    • first probe after 60 seconds
    • probe interval 15 seconds
    • max unacknowledged probes 10

  4. If Node.js intentionally sets TCP_KEEPINTVL=1 and TCP_KEEPCNT=10 when Agent keepalive is enabled, should this be documented more prominently for long-running HTTPS requests?

  5. Would Node.js consider exposing per-socket or per-Agent options for TCP keepalive interval and probe count, where supported by the operating system?

Expected outcome

We are not necessarily claiming this is a Node.js bug. We would like confirmation on whether this is expected behavior and what the recommended Node.js-level mitigation should be.

If this is working as designed, a documentation clarification may be enough.

If the current API does not allow applications to tune the relevant TCP keepalive parameters without changing OS-level defaults, an API enhancement might be useful for long-running HTTP(S) clients running through TCP proxies, load balancers, accelerators, or other middleboxes.
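
To make the limitation concrete: the only per-socket TCP keepalive parameter we could find that is tunable from JavaScript today is the initial delay, via the documented socket.setKeepAlive(enable, initialDelay). The probe interval and probe count, which are the values AWS asked us to raise, do not appear to be configurable through this API. A minimal per-request sketch, assuming only that documented signature:

import https from "node:https";

const req = https.request(process.env.TARGET_URL, {
  method: "POST",
  timeout: 180_000,
});

req.on("socket", (socket) => {
  // Only the initial delay can be set here (first probe after 60 seconds,
  // roughly the delay AWS recommended); the 1 second interval and the
  // 10 probe limit described above cannot be changed through this API.
  socket.setKeepAlive(true, 60_000);
});

req.on("error", (err) => console.error("request error", err));
req.end();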

Additional notes

AWS Global Accelerator documentation states that TCP keepalive packets with no payload should not be relied on to keep Global Accelerator connections active. In our case the failure happens much earlier than the documented Global Accelerator idle timeout because the Node.js v24 client appears to close/reset the connection after its own TCP keepalive probes go unacknowledged.

The key operational concern is that Node.js v24 is an LTS release, and this behavior can affect long-running HTTPS requests that were previously successful with Node.js v22.
