Searched refs:inference (Results 1 – 7 of 7) sorted by relevance
87  u32 inference;  member
110  lp->inference = 0;  in tcp_lp_init()
284  lp->inference = 3 * delta;  in tcp_lp_pkts_acked()
287  if (lp->last_drop && (now - lp->last_drop < lp->inference))  in tcp_lp_pkts_acked()
15 designed to accelerate Deep Learning inference workloads.
18 designed to accelerate Deep Learning inference and training workloads.
15 is a CPU-integrated inference accelerator for Computer Vision
19 - Edge AI - doing inference at an edge device. It can be an embedded ASIC/FPGA,
15 AMD NPU (Neural Processing Unit) is a multi-user AI inference accelerator
13 inference workloads. They are AI accelerators.