缓存容量
- Cache capacity; buffer size; buffer memory size
-
基于TCP协议模型的路由器缓存容量设置方法
A TCP-protocol-model-based method for setting router buffer size
-
该文提出了一种基于客户端缓存容量的分区流切入算法。
This paper proposes a novel algorithm called Partition Stream Tapping Based on Client Buffer Size.
-
采用FIFO级联实现可编程的采样预触发与缓存容量扩展
Implementing Programmable Sampling Pre-Trigger and Buffer Capacity Expansion Using Cascaded FIFOs
-
设计了嵌入式多核SoC排队网络模型评估算法,得出在能够获得最佳的系统性能时,所需的硬件缓存容量的最佳设置值。
A queueing-network-model evaluation algorithm is designed for the embedded multi-core SoC, yielding the optimal hardware buffer capacity settings at which the system achieves its best performance.
-
但在当前提出的全光缓存器设计中,FDL利用率即缓存容量与所使用的FDL总长度之比是相当低的(通常为2/N,其中N为输入端口数)。
However, the FDL efficiency, i.e. the ratio of buffer capacity to the total FDL length used, is rather low in current optical buffer designs (normally 2/N, where N is the number of input ports).
-
客户端缓存容量的分区流切入算法
Partition Stream Tapping Based on Client Buffer Size
-
驱动缓存容量和算法。
Drive cache capacity and algorithms.
-
网格映射防止任意一个数据网格占用所有可用的弹性缓存容量。
Grid capping prevents any one data grid from consuming all of the available elastic cache capacity.
-
通过以客户端缓存容量的大小作为分区依据将原始流分成若干切入区,进一步提高了通道利用率。
This algorithm outperforms stream tapping in channel utilization by partitioning the original stream into several tapping partitions according to the client buffer size.
-
对于硬件模块中的缓存容量设计,本文针对数据驱动控制技术,提出了缓存设计的约束条件,给出了三种设计方法并比较了其间的异同。
For buffer capacity design in hardware modules, this thesis proposes constraints on buffer design for data-driven control, presents three design methods, and compares their similarities and differences.
-
根据接收端的缓存容量的大小调整滑动窗口值,进而改变发送端的发送速率,降低网络流量。
The sliding window size is adjusted according to the receiver's buffer capacity, thereby changing the sender's transmission rate and reducing network traffic.
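The sentence above describes receiver-buffer-driven flow control. A minimal sketch of the idea, with hypothetical class and function names (not from any specific TCP implementation): the receiver advertises a window equal to its free buffer space, and the sender may only transmit what fits in that window.

```python
# Hypothetical sketch of sliding-window flow control: the advertised
# window shrinks as the receiver's buffer fills, throttling the sender.

class Receiver:
    def __init__(self, buffer_capacity: int):
        self.buffer_capacity = buffer_capacity
        self.buffered = 0  # bytes received but not yet consumed

    def advertised_window(self) -> int:
        # Free buffer space is what the receiver advertises to the sender.
        return self.buffer_capacity - self.buffered

    def receive(self, nbytes: int) -> None:
        self.buffered += nbytes

    def consume(self, nbytes: int) -> None:
        self.buffered = max(0, self.buffered - nbytes)


def sender_allowance(in_flight: int, advertised_window: int) -> int:
    # The sender may have at most `advertised_window` bytes outstanding.
    return max(0, advertised_window - in_flight)


rx = Receiver(buffer_capacity=4096)
rx.receive(3072)
print(sender_allowance(0, rx.advertised_window()))    # 1024
rx.consume(2048)
print(sender_allowance(512, rx.advertised_window()))  # 2560
```

As the receiver consumes buffered data, the advertised window grows again and the sender's allowed rate recovers.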
-
驱动器上的缓存容量、所使用的硬盘算法、接口速度和磁录密度组合到一起,就构成了磁盘传输时间。
Combined, the cache capacity on the drive, the disk algorithms used, the interface speed, and the areal recording density determine the disk transfer time.
-
本文讨论数据压缩传真机缓冲存贮器的动态过程与排队模型,推导出缓存容量的计算公式。
In this paper, the dynamic process and queueing model of the buffer memory in data-compression facsimile machines are discussed, and a formula for computing the buffer capacity is derived.
-
然而,在单芯片上集成越来越多的处理器内核增加了对两个关键资源的需求:共享二级缓存容量和片外引脚带宽。
However, the increasing number of processor cores on a single chip raises the demand on two critical resources: the shared L2 cache capacity and the off-chip pin bandwidth.
-
实践表明流式传输不仅使启动延时成十倍、百倍地缩短,而且不需要客户机有太大的缓存容量。
Practice shows that streaming not only shortens startup delay tenfold or even a hundredfold, but also requires little buffer capacity on the client machine.
-
针对FMS中输入/输出缓存区容量有限的约束下建立单AGV在某一时刻内未完成的搬运任务的调度问题,建立了数学模型,目标是AGV完成所有任务的时间最短。
Under the constraint of limited input/output buffer capacity in an FMS, a mathematical model is established for scheduling the handling tasks left unfinished by a single AGV at a given moment, with the objective of minimizing the time for the AGV to complete all tasks.
-
由于缓存的容量通常远远小于I/O路径中下一级存储器的容量,当缓存满了以后,需要在一个恰当的时机选择恰当的数据进行淘汰,回收缓存空间以存储新的数据。
Because the cache is usually much smaller than the next level of storage on the I/O path, when the cache is full an appropriate data block must be evicted at an appropriate time to reclaim space for new data.
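The eviction described above needs a replacement policy; the sentence does not name one, so this sketch assumes LRU (least recently used), one common choice. The `LRUCache` class is illustrative, not from any particular library.

```python
from collections import OrderedDict

# Minimal LRU cache sketch: when full, the least-recently-used entry is
# discarded to reclaim space for new data (an assumed policy, for illustration).

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict least recently used
        self.data[key] = value


cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")           # "a" becomes most recently used
cache.put("c", 3)        # cache full, so "b" is evicted
print(list(cache.data))  # ['a', 'c']
```

Real caches differ mainly in how they pick the victim block (LRU, LFU, FIFO, clock, and so on); the capacity-triggered eviction step is common to all of them.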
-
实例的最大数量是可配置的,这样就可以控制缓存服务的容量上限。
The maximum number of instances is configurable, so that the capacity of the caching service can be capped.
-
由于缓存服务器的容量有限,把所有内容缓存下来是不现实的,所以缓存服务器只能通过某些策略来替换访问率低的内容,存储重复访问的可能性高的内容。
Due to the limited capacity of the cache server, it is not realistic to cache all content. Therefore, the cache server can only use certain strategies to replace content with a low access rate and keep content with a high likelihood of repeated visits.
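Replacing low-access-rate content, as described above, amounts to a frequency-based policy; a minimal LFU-style sketch follows, assuming a hypothetical `LFUCache` class (the source does not specify an actual strategy).

```python
# Hypothetical LFU-style sketch: a capacity-limited cache server that
# evicts the least frequently accessed content, keeping items with a
# higher chance of repeated visits.

class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.values = {}
        self.hits = {}  # access count per key

    def get(self, key):
        if key not in self.values:
            return None
        self.hits[key] += 1
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            # Replace the content with the lowest access rate.
            victim = min(self.hits, key=self.hits.get)
            del self.values[victim], self.hits[victim]
        self.values[key] = value
        self.hits.setdefault(key, 0)


cache = LFUCache(2)
cache.put("video1", "...")
cache.put("video2", "...")
cache.get("video1")          # "video1" now has a higher access count
cache.put("video3", "...")   # evicts "video2", the least accessed
print(sorted(cache.values))  # ['video1', 'video3']
```

Production CDN caches typically combine frequency with recency and object size, but the capacity-triggered replacement shown here is the core of the strategy the sentence describes.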
-
这个缓存的最大存储容量是1000个元素。
This cache holds a maximum of 1000 elements in memory.
-
控制逻辑根据各个组的数据规模为其动态分配缓存资源,并通过可用缓存容量监测拥塞状态。
The control logic dynamically allocates buffer resources to each group according to its data size, and monitors the congestion state through the available buffer capacity.
-
其次,采用无缓存的交换节点结构来减少缓存容量、降低芯片实现成本,使报文在交换节点传输延迟缩减为一个时钟周期。
Secondly, a bufferless switch-node architecture is adopted to reduce buffer capacity and chip implementation cost, cutting the packet transmission latency at each switch node to one clock cycle.