Openshift sandboxed containers / KATA-3457

Sample image ocp-cc-pod does not work


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: High
    • Components: Internal, podvm-builder, trustee

      Description

      OCP 4.16.20

      When I install trustee, I set peer-pods-cm to contain:

      AA_KBC_PARAMS: "cc_kbc::http://kbs-service:8080"
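
      For reference, a minimal sketch of setting that key with oc patch (the ConfigMap name and operator namespace are the ones used elsewhere in this report; verify against your cluster):

      # merge the AA_KBC_PARAMS key into the existing peer-pods-cm ConfigMap
      oc patch configmap peer-pods-cm \
        -n openshift-sandboxed-containers-operator \
        --type merge \
        -p '{"data":{"AA_KBC_PARAMS":"cc_kbc::http://kbs-service:8080"}}'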

       

      Then I install kataconfig.
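
      (A sketch of the KataConfig I mean, with an assumed name; enablePeerPods is the peer-pods switch:)

      apiVersion: kataconfiguration.openshift.io/v1
      kind: KataConfig
      metadata:
        name: example-kataconfig   # name assumed for illustration
      spec:
        enablePeerPods: true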

      Then I try to run the workload, which fails with:

       

      0s (x2 over 1s)   Normal    Pulled           Pod/ocp-cc-pod   Container image "registry.access.redhat.com/ubi9/ubi:9.3" already present on machine
      0s (x2 over 1s)   Warning   Failed           Pod/ocp-cc-pod   Error: CreateContainer failed: ttrpc client init failed
      
      Caused by:
          0: Nix error: ENOENT: No such file or directory
          1: ENOENT: No such file or directory
      
      Stack backtrace:
         0: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
         1: image_rs::image::ImageClient::pull_image::{{closure}}.17591
         2: <kata_agent::storage::image_pull_handler::ImagePullHandler as kata_agent::storage::StorageHandler>::create_device::{{closure}}
         3: kata_agent::storage::add_storages::{{closure}}
         4: kata_agent::rpc::AgentService::do_create_container::{{closure}}::{{closure}}
         5: <kata_agent::rpc::AgentService as protocols::agent_ttrpc_async::AgentService>::create_container::{{closure}}
         6: <protocols::agent_ttrpc_async::CreateContainerMethod as ttrpc::asynchronous::utils::MethodHandler>::handler::{{closure}}
         7: ttrpc::asynchronous::server::HandlerContext::handle_msg::{{closure}}
         8: <ttrpc::asynchronous::server::ServerReader as ttrpc::asynchronous::connection::ReaderDelegate>::handle_msg::{{closure}}::{{closure}}
         9: tokio::runtime::task::raw::poll
        10: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
        11: tokio::runtime::task::raw::poll
        12: std::sys_common::backtrace::__rust_begin_short_backtrace
        13: core::ops::function::FnOnce::call_once{{vtable.shim}}
        14: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
                   at ./builddir/build/BUILD/rustc-1.75.0-src/library/alloc/src/boxed.rs:2007:9
        15: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
                   at ./builddir/build/BUILD/rustc-1.75.0-src/library/alloc/src/boxed.rs:2007:9
        16: std::sys::unix::thread::Thread::new::thread_start
                   at ./builddir/build/BUILD/rustc-1.75.0-src/library/std/src/sys/unix/thread.rs:108:17
        17: start_thread
        18: clone3: unknown
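
      For reference, the workload is essentially the sample ocp-cc-pod; a sketch of the manifest (the image matches the event above; runtimeClassName assumed to be the peer-pods class):

      apiVersion: v1
      kind: Pod
      metadata:
        name: ocp-cc-pod
      spec:
        runtimeClassName: kata-remote   # assumed peer-pods runtime class
        containers:
          - name: ocp-cc-pod
            image: registry.access.redhat.com/ubi9/ubi:9.3
            command: ["sleep", "36000"]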
       

       

      If I change peer-pods-cm to point at an off-cluster TRUSTEE_HOST that is known to be working:

      AA_KBC_PARAMS: "cc_kbc::https://${TRUSTEE_HOST}"
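
      (A generic reachability probe for that host, assumed here rather than taken from the report:)

      # expect an HTTP status code back if the trustee host is reachable
      curl -sk -o /dev/null -w '%{http_code}\n' "https://${TRUSTEE_HOST}"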

       

      I then restart the caa-daemon with

      oc set env ds/peerpodconfig-ctrl-caa-daemon -n openshift-sandboxed-containers-operator REBOOT="$(date)"

      and launch a new pod; it fails the same way.

       

      I've also tried deleting and recreating kataconfig and got the same failure.
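
      (The delete/recreate cycle, along these lines, with the resource name assumed to match the KataConfig sketch above:)

      oc delete kataconfig example-kataconfig
      oc apply -f kataconfig.yaml   # same spec as the sketch above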

      I've also tried the test-signed-image from the internal docs.

      Steps to reproduce

      1. On OCP 4.16.20, install trustee and set AA_KBC_PARAMS: "cc_kbc::http://kbs-service:8080" in peer-pods-cm.
      2. Install kataconfig.
      3. Run the sample ocp-cc-pod workload.

      Expected result

      The ocp-cc-pod container starts and runs.

      Actual result

      CreateContainer fails with "ttrpc client init failed", caused by "Nix error: ENOENT: No such file or directory" (full kata-agent backtrace in the description above).

      Impact

      The sample confidential-containers workload cannot run at all, whether trustee is in-cluster or a known-good off-cluster host.

      Env

      OCP 4.16.20; sandboxed containers operator with peer pods and trustee installed.

      Additional helpful info

      Pod event log and kata-agent stack backtrace are included in the description above.
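
      If more logs are needed, a sketch of gathering them (namespace and daemonset name taken from the restart command above):

      # recent cloud-api-adaptor daemon logs
      oc logs -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon --tail=200
      # events and status for the failing pod
      oc describe pod ocp-cc-pod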

              Assignee: Unassigned
              Reporter: Tom Buskey (tbuskey-rh)