From Wikipedia, the free encyclopedia

In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.

For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important.

For example, cycle sort and timsort are both algorithms to sort a list of items from smallest to largest. Cycle sort organizes the list in time proportional to the number of elements squared (O(n²), see Big O notation), but minimizes the writes to the original array and only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the program/erase cycles and memory footprint of the sorting is more important, cycle sort is a better choice.
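To make this trade-off concrete, the following minimal Python sketch (illustrative only, not taken from the cited comparison) implements the standard cycle sort, instrumented to count writes to the array. Each out-of-place element is written exactly once, so the number of writes never exceeds the length of the list, even though the number of comparisons grows quadratically:

    def cycle_sort(array):
        """Sort array in place and return the number of writes made to it."""
        writes = 0
        for cycle_start in range(len(array) - 1):
            item = array[cycle_start]
            # Count how many items to the right are smaller: that is item's slot.
            pos = cycle_start
            for i in range(cycle_start + 1, len(array)):
                if array[i] < item:
                    pos += 1
            if pos == cycle_start:
                continue  # Already in place: no write needed.
            while item == array[pos]:
                pos += 1  # Skip past duplicates.
            array[pos], item = item, array[pos]
            writes += 1
            # Rotate the rest of the cycle until the start slot is filled.
            while pos != cycle_start:
                pos = cycle_start
                for i in range(cycle_start + 1, len(array)):
                    if array[i] < item:
                        pos += 1
                while item == array[pos]:
                    pos += 1
                array[pos], item = item, array[pos]
                writes += 1
        return writes

    data = [5, 2, 9, 2, 7, 1]
    print(cycle_sort(data), data)  # at most len(data) writes; data is now sorted

On flash memory, where every write wears the medium, keeping writes near this minimum can matter more than raw sorting speed.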

Background

The importance of efficiency with respect to time was emphasized by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine:

"In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"[1]

Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was therefore to use the fastest algorithm that could fit in the available memory.

Modern computers are significantly faster than early computers and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasized that efficiency is still an important consideration:

"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[2]

Overview

An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago.

Computer manufacturers frequently bring out new models, often with higher performance. Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer.

There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order.

In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues.

Theoretical analysis

In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input n. Big O notation is an asymptotic measure of function complexity, where O(f(n)) roughly means the time requirement for an algorithm is proportional to f(n), omitting lower-order terms that contribute less than f(n) to the growth of the function as n grows arbitrarily large. This estimate may be misleading when n is small, but is generally sufficiently accurate when n is large as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however, either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of length n encountered in most data-intensive programs.
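As an illustrative sketch (added here; the helper names are made up for the example), the following Python code counts element comparisons for a naive bubble sort and a simple merge sort, making the quadratic versus linearithmic growth visible as the input size doubles:

    import random

    def bubble_comparisons(a):
        """Return the number of comparisons made by a naive bubble sort."""
        a, comps = list(a), 0
        for end in range(len(a) - 1, 0, -1):
            for i in range(end):
                comps += 1
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return comps

    def merge_comparisons(a):
        """Return (sorted copy, number of comparisons) for a simple merge sort."""
        if len(a) <= 1:
            return list(a), 0
        left, cl = merge_comparisons(a[: len(a) // 2])
        right, cr = merge_comparisons(a[len(a) // 2 :])
        merged, comps, i, j = [], cl + cr, 0, 0
        while i < len(left) and j < len(right):
            comps += 1
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, comps

    for n in (100, 200, 400, 800):
        data = [random.random() for _ in range(n)]
        print(n, bubble_comparisons(data), merge_comparisons(data)[1])

Doubling n roughly quadruples the bubble sort count but only slightly more than doubles the merge sort count, which is the asymptotic difference in miniature.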

Some examples of Big O notation applied to algorithms' asymptotic time complexity include:

Notation | Name | Examples
O(1) | constant | Finding the median from a sorted list of measurements; using a constant-size lookup table; using a suitable hash function for looking up an item.
O(log n) | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap.
O(n) | linear | Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry.
O(n log n) | linearithmic, loglinear, or quasilinear | Performing a fast Fourier transform; heapsort, quicksort (best and average case), or merge sort.
O(n²) | quadratic | Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), Shell sort, quicksort (worst case), selection sort, or insertion sort.
O(cⁿ), c > 1 | exponential | Finding the optimal (non-approximate) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search.
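To get a feel for how quickly these classes diverge, the short Python sketch below (added for illustration) tabulates each growth function for a few input sizes; 2ⁿ is reported only by its number of decimal digits, since the value itself is astronomically large:

    import math

    for n in (10, 100, 1000, 10000):
        print(f"n={n:>5}  log n={math.log2(n):6.1f}  "
              f"n log n={n * math.log2(n):12.0f}  n^2={n * n:12}  "
              f"2^n has {int(n * math.log10(2)) + 1} decimal digits")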

Measuring performance

For new versions of software, or to provide comparisons with competitive systems, benchmarks are sometimes used which assist with gauging an algorithm's relative performance. If a new sorting algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed.

Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages,[3][4] and The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.

Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user-specified criteria. This is quite simple, as the "Nine Language Performance Round-up" by Christopher W. Cowell-Shah demonstrates by example.[5]
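A minimal harness for such a do-it-yourself benchmark might look like the Python sketch below (the workload and repeat count are arbitrary choices for illustration). Re-implementing the same workload in each language under comparison yields directly comparable figures, which is essentially what the published round-ups automate:

    import time

    def workload():
        """An arbitrary CPU-bound task: sum the square roots of 1..1,000,000."""
        total = 0.0
        for i in range(1, 1_000_001):
            total += i ** 0.5
        return total

    def benchmark(fn, repeats=5):
        """Return the best wall-clock time over several runs.

        Taking the minimum reduces noise from other processes
        sharing the machine.
        """
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - start)
        return best

    print(f"best of 5 runs: {benchmark(workload):.3f} s")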

Implementation concerns

Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded,[6] or the choice of a compiler for a particular language, or the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler.[3] See the articles on just-in-time compilation and interpreted languages.

There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls.[7]

Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, though some can be easily reconfigured. As parallel and distributed computing grew in importance in the late 2010s, more investments were made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP and MPI.

Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have extensive knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to perform floating-point calculations.

Measures of resource usage

Measures are normally expressed as a function of the size of the input n.

The two most common measures are:

  • Time: how long does the algorithm take to complete?
  • Space: how much working memory (typically RAM) is needed by the algorithm? This has two aspects: the amount of memory needed by the code (auxiliary space usage), and the amount of memory needed for the data on which the code operates (intrinsic space usage).

For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest are:

  • Direct power consumption: power needed directly to operate the computer.
  • Indirect power consumption: power needed for cooling, lighting, etc.

As of 2018, power consumption is growing in importance as a metric for computational tasks of all types and at all scales, ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing.

Less common measures of computational efficiency may also be relevant in some cases:

  • Transmission size: bandwidth could be a limiting factor. Data compression can be used to reduce the amount of data to be transmitted (see the sketch after this list). Displaying a picture or image (e.g. Google logo) can result in transmitting tens of thousands of bytes (48K in this case) compared with transmitting six bytes for the text "Google". This is important for I/O bound computing tasks.
  • External space: space needed on a disk or other external memory device; this could be for temporary storage while the algorithm is being carried out, or it could be long-term storage needed to be carried forward for future reference.
  • Response time (latency): this is particularly relevant in a real-time application when the computer system must respond quickly to some external event.
  • Total cost of ownership: particularly if a computer is dedicated to one particular algorithm.
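As a small illustration of trading computation for transmission size (a sketch added here, using Python's standard zlib module), highly repetitive data compresses dramatically before being sent:

    import zlib

    message = b"Google " * 1000               # 7000 bytes of repetitive text
    compressed = zlib.compress(message, 9)    # level 9: best compression
    print(len(message), "bytes ->", len(compressed), "bytes")

    assert zlib.decompress(compressed) == message  # lossless round trip

The saving costs CPU time at both ends, so whether compression pays off depends on the relative costs of computation and bandwidth.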

Time

Theory

Analysis of algorithms, typically using concepts like time complexity, can be used to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Parallel algorithms may be more difficult to analyze.
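For instance (a standard worked example, added here for illustration), an algorithm that examines every pair of n input items performs T(n) = (n − 1) + (n − 2) + … + 1 = n(n − 1)/2 comparisons; discarding the constant factor 1/2 and the lower-order term −n/2 leaves the asymptotic estimate O(n²).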

Practice

A benchmark can be used to assess the performance of an algorithm in practice. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests.

Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment.

This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions.
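A minimal sketch of such a measurement in Python (added for illustration; the repeat count is an arbitrary choice): time.process_time() reports CPU time while time.perf_counter() reports elapsed wall-clock time, and averaging over several runs smooths out interference from other processes:

    import time

    def measure(fn, runs=10):
        """Return (average CPU time, average elapsed time) of fn over several runs."""
        cpu = wall = 0.0
        for _ in range(runs):
            c0, w0 = time.process_time(), time.perf_counter()
            fn()
            cpu += time.process_time() - c0
            wall += time.perf_counter() - w0
        return cpu / runs, wall / runs

    cpu_s, wall_s = measure(lambda: sorted(range(1_000_000, 0, -1)))
    print(f"CPU {cpu_s:.3f} s, elapsed {wall_s:.3f} s")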

Space

This section is concerned with the use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As with the time analysis above, the algorithm is analyzed, typically using space complexity analysis, to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation.

There are up to four aspects of memory usage to consider:

  • The amount of memory needed to hold the code for the algorithm.
  • The amount of memory needed for the input data.
  • The amount of memory needed for any output data.
    • Some algorithms, such as sorting, often rearrange the input data and do not need any additional space for output data. This property is referred to as "in-place" operation.
  • The amount of memory needed as working space during the calculation (see the sketch after this list).
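As a brief illustration of measuring working space in practice (a sketch added here, using Python's standard tracemalloc module), the same computation can need O(n) or O(1) working memory depending on whether intermediate values are materialized all at once or streamed:

    import tracemalloc

    def materialized(n):
        """Builds a full list of n squares first: O(n) working space."""
        return sum([i * i for i in range(n)])

    def streamed(n):
        """Generates squares one at a time: O(1) working space."""
        return sum(i * i for i in range(n))

    for fn in (materialized, streamed):
        tracemalloc.start()
        fn(1_000_000)
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"{fn.__name__}: peak ≈ {peak / 1024:.0f} KiB")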

Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. By the late 2010s, it had become typical for personal computers to have between 4 and 32 GB of RAM, millions of times as much memory.

Caching and memory hierarchy

Modern computers can have relatively large amounts of memory (possibly gigabytes), so having to squeeze an algorithm into a confined amount of memory is not the kind of problem it used to be. However, the different types of memory and their relative access speeds can be significant:

  • Processor registers are the fastest memory, with the least amount of space. Most direct computation on modern computers occurs with source and destination operands in registers before being updated to the cache, main memory and virtual memory if needed. On a processor core, there are typically on the order of hundreds of bytes or fewer of register availability, although a register file may contain more physical registers than architectural registers defined in the instruction set architecture.
  • Cache memory is the second fastest, and second smallest, memory available in the memory hierarchy. Caches are present in processors such as CPUs or GPUs, where they are typically implemented in static RAM, though they can also be found in peripherals such as disk drives. Processor caches often have their own multi-level hierarchy; lower levels are larger, slower and typically shared between processor cores in multi-core processors. In order to process operands in cache memory, a processing unit must fetch the data from the cache, perform the operation in registers and write the data back to the cache. This operates at speeds comparable to (about 2–10 times slower than) the CPU or GPU's arithmetic logic unit or floating-point unit if in the L1 cache.[8] It is about 10 times slower if there is an L1 cache miss and the data must be retrieved from and written to the L2 cache, and a further 10 times slower if there is an L2 cache miss and it must be retrieved from an L3 cache, if present.
  • Main physical memory is most often implemented in dynamic RAM (DRAM). The main memory is much larger (typically gigabytes compared to ≈8 megabytes) than an L3 CPU cache, with read and write latencies typically 10–100 times slower.[8] As of 2018, RAM is increasingly implemented on the same chip as the processor, as CPU or GPU memory.[citation needed]
  • Paged memory, often used for virtual memory management, is memory stored in secondary storage such as a hard disk, and is an extension to the memory hierarchy which allows use of a potentially larger storage space, at the cost of much higher latency, typically around 1000 times slower than a cache miss for a value in RAM.[8] While originally motivated to create the impression of more memory being available than was truly available, virtual memory is more important in contemporary usage for its space–time trade-off and for enabling the usage of virtual machines.[8] Misses in main memory are called page faults, and incur huge performance penalties on programs.

An algorithm whose memory needs will fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to paging. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another.

In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much more memory, but at the cost of performance. Much higher speed can be obtained if an algorithm and its data fit in cache memory; in this case minimizing space will also help minimize time. This is called the principle of locality, and can be subdivided into locality of reference, spatial locality, and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well.
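The effect of locality can be observed even from a high-level language. The following sketch (added for illustration, and assuming the third-party NumPy library is available) sums a large two-dimensional array by rows and then by columns; with the default row-major layout, the row-wise traversal reads memory contiguously and typically runs noticeably faster on machines with conventional caches:

    import time
    import numpy as np

    a = np.random.rand(5000, 5000)  # ~200 MB, row-major (C order) layout

    def timed(fn):
        start = time.perf_counter()
        fn()
        return time.perf_counter() - start

    # Row-wise: consecutive elements are adjacent in memory (good spatial locality).
    rows = timed(lambda: sum(float(a[i].sum()) for i in range(a.shape[0])))
    # Column-wise: consecutive elements are 5000 doubles apart (poor locality).
    cols = timed(lambda: sum(float(a[:, j].sum()) for j in range(a.shape[1])))
    print(f"row-wise {rows:.2f} s, column-wise {cols:.2f} s")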

References

  1. ^ Green, Christopher, Classics in the History of Psychology, retrieved 19 May 2013
  2. ^ Knuth, Donald (1974), "Structured Programming with go-to Statements" (PDF), Computing Surveys, 6 (4): 261–301, CiteSeerX 10.1.1.103.6084, doi:10.1145/356635.356640, S2CID 207630080, archived from the original (PDF) on 24 August 2009, retrieved 19 May 2013
  3. ^ a b "Floating Point Benchmark: Comparing Languages (Fourmilog: None Dare Call It Reason)". Fourmilab.ch. 4 August 2005. Retrieved 14 December 2011.
  4. ^ "Whetstone Benchmark History". Roylongbottom.org.uk. Retrieved 14 December 2011.
  5. ^ OSNews Staff. "Nine Language Performance Round-up: Benchmarking Math & File I/O". osnews.com. Retrieved 18 September 2018.
  6. ^ Kriegel, Hans-Peter; Schubert, Erich; Zimek, Arthur (2016). "The (black) art of runtime evaluation: Are we comparing algorithms or implementations?". Knowledge and Information Systems. 52 (2): 341–378. doi:10.1007/s10115-016-1004-2. ISSN 0219-1377. S2CID 40772241.
  7. ^ Guy Lewis Steele, Jr. "Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO". MIT AI Lab. AI Lab Memo AIM-443. October 1977.
  8. ^ a b c d Hennessy, John L; Patterson, David A; Asanović, Krste; Bakos, Jason D; Colwell, Robert P; Bhattacharjee, Abhishek; Conte, Thomas M; Duato, José; Franklin, Diana; Goldberg, David; Jouppi, Norman P; Li, Sheng; Muralimanohar, Naveen; Peterson, Gregory D; Pinkston, Timothy Mark; Ranganathan, Prakash; Wood, David Allen; Young, Clifford; Zaky, Amr (2011). Computer Architecture: a Quantitative Approach (Sixth ed.). Elsevier Science. ISBN 978-0128119051. OCLC 983459758.