2021-08-22 15:55:01 | A doctor of artificial intelligence on deep learning

S: student; Li: the teacher, Li Xiaorong

Li: Each of the three generations of artificial intelligence has its own origins, dependencies and biases. The first generation depended on an understanding of mechanisms and rules; the second was built on expert knowledge bases; the third, represented by deep learning, depends on large databases of relevant examples. The first two are now called "good old-fashioned AI". They were driven by principles and by knowledge respectively, and the academic community already understands the difficulties and limits of their practical use well, so I will not repeat them here. The data-driven third generation can hardly overcome any defect or limitation rooted in the database it uses: if the data are insufficient, bad, incomplete, wrong or mismatched with the task at hand, learning will go poorly. For example, the data used for training or evaluation may lack pertinence (failing to reflect what is unique about the current situation), be too old (no longer matching current conditions), be too small in volume or too narrow in variety, carry statistical bias (too many or too few examples in some categories), or be too broad, incomplete, underrepresentative, full of blind spots, misleading or qualitatively wrong. Such systems also struggle with catastrophic "black swan" events of very small probability: (1) if no such event ever appeared in training, the system does not know how to handle it (several fatal Tesla autopilot accidents are rooted in this); (2) once such an event that should never have happened does happen, and its disastrous consequences are learned, the system tends to over-think and over-react ("once bitten by a snake, afraid of a coiled rope for ten years"). These are inherent weaknesses of data-driven methods. So even for a single task, the demands a deep learning network places on its database may already be too high to be realistic, to say nothing of general intelligence.
To put it mildly, even if the approach were applicable to general intelligence in principle, the qualified database required for training would, given the sheer variety of tasks, be impossibly large.
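The "never encountered, so cannot handle" failure mode above can be caricatured in a few lines. This is a hypothetical toy, not anyone's real autopilot: a 1-nearest-neighbour "model" trained only on narrow data still emits a confident-looking label for an input far outside anything it has seen, because a plain classifier has no notion of "I have never seen this".

```python
def predict(train, x):
    # train: list of (feature, label) pairs; plain 1-NN on a 1-D feature
    v, lbl = min(train, key=lambda p: abs(p[0] - x))
    return lbl, abs(v - x)

# Made-up training data covering only "normal" speeds -- a stand-in for
# limited, unrepresentative sensor data.
train = [(30, "safe"), (35, "safe"), (40, "safe"), (90, "unsafe"), (95, "unsafe")]

lbl, dist = predict(train, 37)     # in-distribution: the nearest point is close
print(lbl, dist)                   # -> safe 2

lbl2, dist2 = predict(train, 500)  # black-swan input, far outside the data
print(lbl2, dist2)                 # -> unsafe 405: it still answers, confidently
```

Note that the nearest-neighbour distance is actually available here as an out-of-distribution warning signal, but a classifier that reports only the label throws that signal away.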

S: What does it mean to say the data has "blind spots"? Mr. Li, can you explain?

Li: Take the deep learning network of an autonomous-driving system as an example. Its inputs are mainly environmental perception, self-localization and motion state. But because cost is limited, the sensors mounted on the vehicle are limited, so they may fail to collect comprehensive, sufficient, useful information reliably and in time. That is a blind spot in the data. Some Tesla autopilot accidents are rooted in this.

On the other hand, old AI focuses on imitating the upper-level, macroscopic, abstract functions of human intelligence, whereas the deep learning network draws its inspiration from the lower-level microstructure of the human brain's information processing. Its network structure and parameters are trained and determined from example data, and the natural form of its computed output is continuous rather than discrete or logical. This fits tasks of cognition and decision-making under uncertainty, but it can hardly produce definite, necessary results, so it fits poorly with abstract work that demands absolute purity and complete precision, such as logical calculus and deductive reasoning. It is likewise hard-pressed to discover strict laws (as in machine theorem proving), construct logical rules, create precise concepts, update human knowledge, develop deep understanding, supply new ideas, or do other work at the logical or other high levels. These are shortcomings of statistical, data-trained methods such as deep learning, though they can be compensated by combining them with other methods.

In short, the "mechanism- or knowledge-driven" old AI suits abstract problems with hard rules and hard constraints, built on book smarts; it struggles with contradictory situations, and its progress depends more on internals such as general mechanisms. The "task-driven" deep learning network suits concrete problems with soft rules and soft constraints, built on street smarts: uncertain, essentially "sloppy" or approximate, and focused more on external manifestations such as practical effect. Naturally it has yielded fruitful applications but few theoretical results.

S: Why is a deep learning network ill-suited to abstract work that demands complete precision, such as logical calculus and deductive reasoning?

Li: An artificial neural network (ANN) with undetermined connection weights is a function with undetermined parameters. Training on data means selecting the parameters that best fit the data; the result is an ANN with determined weights, i.e. a function with determined parameters. The universal approximation theorem for ANNs says that the set of feedforward single-hidden-layer (or deep) ANNs is dense in the set of continuous functions: any continuous function (on a compact domain) can be approximated by such an ANN to any accuracy. This is the main theoretical support for the ANN method. But the functions corresponding to completely precise abstract work such as logical calculus and deductive reasoning are not continuous, so the ANN appears to lack a full theoretical guarantee of approximation accuracy there. Moreover, deep learning is accomplished through training on examples; because of its probabilistic and statistical, data-based essence, it is inevitably affected by error, noise and other uncertainties, which makes it hard to match precise work. Figuratively speaking, the fitted surface obtained this way will never be as clean and crisp as the surface that precise work corresponds to.
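The universal approximation theorem can even be demonstrated constructively, without any training. The sketch below (a toy, with the unit count n and steepness k chosen arbitrarily) hand-builds a one-hidden-layer sigmoid network as a staircase of steep sigmoid units that approximates the continuous function f(x) = x² on [0, 1]; for a discontinuous or logical target, no such guarantee applies.

```python
import math

def sigmoid(z):
    # numerically stable logistic sigmoid
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

def f(x):
    return x * x  # a continuous target on [0, 1]

# Hand-built one-hidden-layer network: n steep sigmoid units forming a staircase.
# Each unit contributes the jump f(t_i) - f(t_{i-1}) once x passes t_i.
n, k = 100, 500.0                       # assumed: number of units and steepness
t = [i / n for i in range(n + 1)]
jumps = [f(t[i]) - f(t[i - 1]) for i in range(1, n + 1)]

def net(x):
    return f(t[0]) + sum(j * sigmoid(k * (x - ti)) for j, ti in zip(jumps, t[1:]))

# Worst error over a fine test grid: it shrinks as n grows, exactly as the
# theorem promises for a continuous target.
max_err = max(abs(net(x / 1000) - f(x / 1000)) for x in range(1001))
print(max_err)   # well under 0.05 for n = 100
```

The point of the construction is that density in the continuous functions is what the theorem gives; a step function or a truth table sits outside that guarantee, and a trained network can only smooth over the jump.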

The gold standard of modernity is the scientific, the mathematical and the abstract. With modernization and the advance of modern science and technology, including artificial intelligence, human thinking, behavior and society are becoming ever more mechanized, routinized, regularized, precise and standardized. Accordingly, information reduction, algorithmic thinking, logical checking and other habits bearing the stamp of science and technology grow ever more prevalent, while the standing of common sense and of intuition that "can be grasped but not spoken" declines, and the domains of humanistic cultivation, ethics and spiritual life shrink by the day. This is a broad trend of alienation and mechanization of modern people, and it gradually worsens the imbalance between the two sides described above. The development of old AI undoubtedly reinforced this trend; yet the recently booming technology built on statistical data training, represented by deep learning, is in essence not of that rationalist spirit at all.

Old AI mostly corresponds to explicit knowledge that can be expressed clearly and concisely. The ANN, lacking induction and abstraction and relying on supervised learning or on unsupervised learning (reinforcement learning and the like), acquires something closer to tacit or implicit knowledge that cannot be expressed concisely and clearly (like the ability to play the piano). The Western view of knowledge has always valued the explicit over the implicit; the great success of deep learning now gives this "knowledge that cannot be stated concisely" its moment of pride. Yet deep learning networks can typically "do without being able to explain", and they are poor at planning, while good planning is generally held to be a minimum requirement of high intelligence. Old AI handles soft problems badly; for hard problems, the ANN can hardly obtain exact solutions without guidance, though it may approach them as its scale grows. One defense runs: the so-called exact solution of a hard problem is merely the ideal solution of an idealized problem under idealized conditions, not the actual solution of an actual problem in an actual situation. In a deep learning network, the concepts in a distributed representation are fuzzy "concept clouds" rather than sharply bounded "concept particles", which is closer to most real situations.
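The "concept cloud" can be made concrete with a standard softmax output layer. In this small sketch (the class names and logit values are invented for illustration), the network's natural output is a graded membership over classes; collapsing it to a single hard label is an extra step that discards the fuzziness.

```python
import math

def softmax(scores):
    # subtract the max for numerical stability, then normalize exponentials
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a network might emit for the classes cat / dog / fox.
probs = softmax([2.0, 1.6, 0.3])
print(probs)   # a fuzzy "concept cloud": each class has a graded degree of membership

# Collapsing to a crisp label ("concept particle") throws the gradation away.
hard = max(range(len(probs)), key=lambda i: probs[i])
print(hard)    # index of the winning class
```

Here the leading class gets barely more than half the mass, so the hard label hides how genuinely ambiguous the input was.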

As mentioned earlier, the Western cultural tradition leans more on notions such as particle, element, unit, localization and precision, while the Chinese tradition leans more on notions such as field, wave, qi, cloud, relation, network and fuzziness. Relatively speaking, then, old AI has a more Western character and the deep learning network a more Eastern one. The distributed representation of the ANN tilts toward the Chinese, relational style of representation, in which the generation, association, change and differentiation of concepts and generalizations are more flexible, not as rigid as in a localized representation. This, too, is consistent with the trend above.

Moreover, all three generations of artificial intelligence research so far have focused on the physiological or behavioral basis of intelligence while neglecting its social attributes. It is people's social needs that produce language, self-expression and self-consciousness; the language, writing, knowledge, experience and even consciousness on which human intelligence depends are produced and developed in social life, and all bear the stamp of culture. Sociality is a distinct mark separating humans from other animals and matters greatly to the constitution of human intelligence; the case of feral "wolf children" is enough to show its importance. In addition, artificial intelligence research, deep learning included, pays too little attention to the dynamics and time-variation of intelligence, i.e. to the time dimension, and so is better suited to static problems.

On the other hand, because it is data-driven, the corresponding research and development lacks theoretical guidance: it refines an "elixir of intelligence" by trial-and-error recipes. The common techniques do supply a strong, general-purpose framework, but in the face of a specific problem there is no theory and no systematic method to fall back on, and developers turn into "alchemists".
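The "alchemy" can be caricatured as blind grid search. In this invented toy, the developer has no theory about which learning rate is right for gradient descent on a simple quadratic, so they try a list of guesses and keep whichever happens to work; nothing in the procedure explains why 0.5 is ideal or why values above 1 diverge, even though for this objective the theory is elementary.

```python
# Toy objective: fit w to minimize (w - 3)^2 by gradient descent.
def run(lr, steps=100):
    w = 0.0
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)   # gradient of (w - 3)^2 is 2(w - 3)
    return w, (w - 3.0) ** 2        # final weight and final loss

# Guessed candidates, chosen with no theory -- pure trial and error.
grid = [0.001, 0.01, 0.1, 0.5, 0.9, 1.1]
results = {lr: run(lr) for lr in grid}

# Keep whatever produced the smallest loss.
best_lr = min(grid, key=lambda lr: results[lr][1])
best_w, best_loss = results[best_lr]
print(best_lr, best_w)   # 0.5 lands exactly on w = 3 in one step; 1.1 diverges
```

For real networks the loss surface admits no such closed-form analysis, which is exactly why the tuning loop stays trial-and-error, only with far more knobs and far more compute per guess.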

S: After listening to you, Mr. Li, I still don't know whether artificial intelligence is a promising major.

Li: If what interests you is real intelligence, "both capable and intelligent", take care not to be fooled. If what interests you is specialized ability, then artificial intelligence, like many other majors, is a good one. Even so, prepare yourself mentally. Recent artificial intelligence research centers on applying big-data deep learning to specific problems, and the great majority of such work (especially a newcomer's share of it) is uncreative and tedious: labeling masses of data, the trial-and-error "alchemy" of tuning, testing the results of scheme after scheme, and so on. Work like that will drain your enthusiasm and drive. Genuine research into fundamental methods is bound to be rare.

In short, I think the artificial intelligence of the past was merely "intelligent but not capable", and the present artificial intelligence represented by big-data deep learning is probably "capable but not intelligent". I do not believe it can achieve "both capable and intelligent" in the near future, if ever; I cannot even believe that non-living, inorganic machines can achieve "both capable and intelligent" at all. To reach "both capable and intelligent" and ultimately surpass human intelligence, we will have to rely on an organic combination of life and machine after the transformation of human beings, and that research must carry a premise: moral cultivation must be raised beforehand, or at least alongside, so that the result is both intelligent and moral.
