
Appendix D: Key Terminologies





Description of Key Words
  1. Curriculum
Curriculum, as an idea, has its roots in the Latin word for race-course, describing the curriculum as the course of deeds and experiences through which children become the adults they should be, for success in adult society. In this view, the curriculum encompasses the entire scope of formative deed and experience occurring in and out of school, and not only experiences occurring in school: experiences that are unplanned and undirected, as well as experiences intentionally directed toward the purposeful formation of adult members of society.
To Bobbitt, the curriculum is a social engineering arena. Given his cultural presumptions and social definitions, his curricular formulation has two notable features: (i) that scientific experts would be best qualified to, and justified in, designing curricula, based upon their expert knowledge of what qualities are desirable in adult members of society and which experiences would generate those qualities; and (ii) curriculum defined as the deeds and experiences the student ought to have in order to become the adult he or she ought to become.
Hence, he defined the curriculum as an ideal, rather than as the concrete reality of the deeds and experiences that form people into who and what they are.
Contemporary views of curriculum reject these features of Bobbitt's postulates, but retain the basis of curriculum as the course of experience(s) that forms human beings into persons. Personal formation via curricula is studied at the personal level and at the group level, i.e. cultures and societies (e.g. professional formation, academic discipline via historical experience). The formation of a group is reciprocal with the formation of its individual participants.
Although it formally appeared in Bobbitt's definition, curriculum as a course of formative experience also pervades the work of John Dewey (who disagreed with Bobbitt on important matters). Although Bobbitt's and Dewey's idealistic understanding of "curriculum" differs from current, restricted uses of the word, curriculum writers and researchers generally share it as a common, substantive understanding of curriculum.
  2. Curriculum means two things: (i) the range of courses from which students choose what subject matter to study, and (ii) a specific learning program. In the latter case, the curriculum collectively describes the teaching, learning, and assessment materials available for a given course of study.
Currently, a spiral curriculum is promoted as allowing students to revisit a subject's content at the different levels of development of the subject matter being studied. The constructivist approach underlying the spiral curriculum proposes that children learn best via active engagement with the educational environment, i.e. discovery learning. Crucial to the curriculum is the definition of the course objectives, which are usually expressed as learning outcomes and normally include the program's assessment strategy. These outcomes and assessments are grouped as units (or modules), and the curriculum therefore comprises a collection of such units, each, in turn, comprising a specialised, specific part of the curriculum. So a typical curriculum includes communications, numeracy, information technology, and social skills units, with specific, specialised teaching of each.



  3. Critical Competencies
Clusters of competencies that frequently remain unattended to, or unattained, by learners. For example, a learner may habitually avoid questions related to geometry in the class examination in mathematics.

  4. CAL Package              The Computer Aided Learning (CAL) program was initiated in 2002 to harness the potential of computer technology for education. The objectives of the program were to make learning play, assessment fun, and knowledge equal for all students. During implementation, the objective of ‘equal knowledge for all’ evolved into ‘equal opportunity for all’. To this end, the Foundation created syllabus-based bilingual and trilingual multimedia content. As part of the program, the content, along with a one-day orientation, was given to teachers. The program, in partnership with the respective State governments, covered approximately 16,000 schools across 14 States in the country. The program identified six factors critical to the success of computer aided learning. These are:
·         Teacher involvement and leadership
·         Computer Aided Learning to be an integral part of teachers’ pedagogy and classroom processes and not a stand-alone activity
·         Dedicated Government resource and ownership
·         All time availability of the prescribed infrastructure and hardware
·         Availability of digital learning material of adequate quality and quantity
·         Continuous ongoing dialogue with teachers to explore the strengths of the available technology
These critical factors provided the ground for developing a demonstrable model of computer aided learning. The model took the form of a systematic research study on the capability development of teachers, and on supporting them to use technology to meet the ends of learning.


  5. Hypothesis
A hypothesis is a proposition that attempts to explain a set of facts in a unified way. It generally forms the basis of experiments designed to establish its plausibility. Simplicity, elegance, and consistency with previously established hypotheses or laws are also major factors in determining the acceptance of a hypothesis. Though a hypothesis can never be proven true (in fact, hypotheses generally leave some facts unexplained), it can sometimes be verified beyond reasonable doubt in the context of a particular theoretical approach.

A scientific law is a hypothesis that is assumed to be universally true. A law has good predictive power, allowing a scientist (or engineer) to model a physical system and predict what will happen under various conditions. New hypotheses inconsistent with well-established laws are generally rejected, barring major changes to the approach. An example is the law of conservation of energy, which was firmly established but had to be qualified with the revolutionary advent of quantum mechanics and the uncertainty principle.

A theory is a set of statements, including laws and hypotheses, that explains a group of observations or phenomena in terms of those laws and hypotheses. A theory thus accounts for a wider variety of events than a law does. Broad acceptance of a theory comes when it has been tested repeatedly on new data and been used to make accurate predictions. Although a theory generally contains hypotheses that are still open to revision, sometimes it is hard to know where the hypothesis ends and the law or theory begins. Albert Einstein's theory of relativity, for example, consists of statements that were originally considered to be hypotheses (and daring at that). But all the hypotheses of relativity have now achieved the authority of scientific laws, and Einstein's theory has supplanted Newton's theory of gravitation. In some cases, such as the germ theory of infectious disease, a theory becomes so completely accepted that it stops being referred to as a theory.

The null hypothesis is a hypothesis about a population parameter. The purpose of hypothesis testing is to test the viability of the null hypothesis in the light of experimental data. Depending on the data, the null hypothesis either will or will not be rejected as a viable possibility.

Consider a researcher interested in whether the time to respond to a tone is affected by the consumption of alcohol. The null hypothesis is that µ1 - µ2 = 0, where µ1 is the mean time to respond after consuming alcohol and µ2 is the mean time to respond otherwise. Thus, the null hypothesis concerns the parameter µ1 - µ2, and the null hypothesis is that the parameter equals zero.

The null hypothesis is often the reverse of what the experimenter actually believes; it is put forward to allow the data to contradict it. In the experiment on the effect of alcohol, the experimenter probably expects alcohol to have a harmful effect. If the experimental data show a sufficiently large effect of alcohol, then the null hypothesis that alcohol has no effect can be rejected. 
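As a minimal sketch of how such a null hypothesis might be tested in practice, the following Python snippet compares response-time samples from an alcohol group and a control group with a two-sample t-test; the data values are invented purely for illustration, and scipy.stats is assumed to be available.

```python
# A minimal sketch of testing H0: mu1 - mu2 = 0 with a two-sample t-test.
# The response times below are invented for illustration only.
from scipy import stats

alcohol = [0.42, 0.51, 0.48, 0.55, 0.47, 0.53, 0.49, 0.50]   # seconds
control = [0.38, 0.41, 0.43, 0.39, 0.44, 0.40, 0.42, 0.37]   # seconds

t_stat, p_value = stats.ttest_ind(alcohol, control)

# Reject the null hypothesis at the conventional .05 level if p < .05.
if p_value < 0.05:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject H0 (alcohol has an effect)")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: fail to reject H0")
```

A sufficiently small p-value corresponds to the "sufficiently large effect" described above: the data contradict the null hypothesis, so it is rejected.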
  6. Competencies
Competence is a standardized requirement for an individual to properly perform a specific job. It encompasses a combination of knowledge, skills and behavior utilized to improve performance. More generally, competence is the state or quality of being adequately or well qualified, having the ability to perform a specific role.
For instance, management competency includes the traits of systems thinking and emotional intelligence, and skills in influence and negotiation. A person possesses a competence as long as the skills, abilities, and knowledge that constitute that competence are a part of them, enabling the person to perform effective action within a certain workplace environment. Therefore, one might not lose knowledge, a skill, or an ability, but still lose a competence if what is needed to do a job well changes.
Competence is also used to work with more general descriptions of the requirements of human beings in organizations and communities. Examples are educational institutions and other organizations that want a general language for stating what a graduate of a program must be able to do in order to graduate, or what a member of an organization must be able to do in order to be considered competent. An important detail of this approach is that all competences have to be action competences, which means you show in action that you are competent. In the military, the training systems for this kind of competence are called artificial experience, which is the basis for all simulators.
Within a specific organization or professional community, professional competence is frequently valued. These are usually the same competencies you have to demonstrate in a job interview. But today there is another way of looking at it: there are certain general areas of occupational competence required if you want to keep a job or get a promotion. For all organizations and communities there is a set of primary tasks to which competent people have to contribute all the time. For a university student, for example, the primary tasks could be:
  • Handling theory
  • Handling methods
  • Handling the information of the assignment
The four general areas of competence are:
  • Meaning Competence: You must be able to identify with the purpose of the organization or community and act from the preferred future in accordance with the values of the organization or community.
  • Relation Competence: You must be able to create and nurture connections to the stakeholders of the primary tasks.
  • Learning Competence: You must be able to create and look for situations that make it possible to experiment with the set of solutions that make it possible to complete the primary tasks and reflect on the experience.
  • Change Competence: You must be able to act in new ways when it will promote the purpose of the organization or community and make the preferred future come to life.

  7. Experimental Research              Experimental Research Design and Analysis offers a rational approach to the quantitative methods of agricultural experiments. In its innovative presentation of the most commonly used experimental designs, this advanced text/reference discusses the logical reasons for selecting a particular design and shows how experimental results can be analyzed and interpreted. Real-world examples from different areas of agriculture are featured throughout the book to illustrate how practical issues of design and analysis are handled.

  8. True Experiments              For many true experimental designs, pretest-posttest designs are the preferred method to compare participant groups and measure the degree of change occurring as a result of treatments or interventions. Pretest-posttest designs grew from the simpler posttest-only designs, and address some of the issues arising with assignment bias and the allocation of participants to groups. One example is education, where researchers want to monitor the effect of a new teaching method upon groups of children. Other areas include evaluating the effects of counseling, testing medical treatments, and measuring psychological constructs. The only stipulation is that the subjects must be randomly assigned to groups, in a true experimental design, to properly isolate and nullify any nuisance or confounding variables.
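As a minimal sketch of that random assignment requirement, the following Python snippet (hypothetical participant IDs, standard library only) shuffles a participant list and splits it into treatment and control groups:

```python
# A minimal sketch of random assignment for a true experimental design.
# Participant IDs are hypothetical; only the Python standard library is used.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

random.seed(42)              # fixed seed so the assignment is reproducible
random.shuffle(participants)

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```

Because every participant has an equal chance of landing in either group, individual differences are spread across both groups rather than systematically biasing one of them.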

  9. Control Group              A scientific control group is an essential part of most research designs, allowing researchers to isolate and eliminate confounding variables and bias. Normal biological variation, researcher bias and environmental variation are all factors that can skew data, so scientific control groups provide a baseline. As well as eliminating other variables, scientific control groups help the researcher to show that the experimental design is capable of generating results. A researcher must only measure one variable at a time, and using a scientific control group gives reliable baseline data against which to compare results. For example, a medical study will use two groups, giving one set of patients the real medicine and the other a placebo, in order to rule out the placebo effect. In this particular type of research, the experiment is double blind: neither the doctors nor the patients are aware of which pill they are receiving, curbing potential research bias. In the social sciences, control groups are the most important part of the experiment, because it is practically impossible to eliminate all of the confounding variables and bias. For example, the placebo effect for medication is well documented, and the Hawthorne effect is another influence whereby, if people know that they are the subjects of an experiment, they automatically change their behavior. There are two main types of control, positive and negative, both providing researchers with ways of increasing the statistical validity of their data.

  10. Subjects              In any of these fields, ethical considerations and the wellbeing of the participants are the single most important consideration. The researcher must ensure that no harm comes to the group, and it is generally accepted that honesty is the first parameter: the researcher must be open about purpose and intent. The ethical considerations concerning permissions, consent and possible suffering are very similar to the guidelines governing psychology researchers. Wherever possible, the observer should strive to understand the particular community. This may mean knowledge of the language, or some experience with the culture.
One example would be studying sexuality: whilst the observer need not be gay or lesbian to understand those groups, it does help, giving them a unique insight into the particular difficulties faced by gay communities.
There must be no chance of causing psychological or physical suffering to the participants, and they should be treated as partners in research. A researcher using human research subjects must avoid the aloof approach required by quantitative methods.
It is vital that the social science subjects are willing participants in the research, and are not coerced or induced into participating through false promises or benefits.
The social science subjects should be fully informed about the research, and the possible implications should be explained through a pre-experimental briefing. Verbal and written information, in a language that they understand, should always be provided.
The participants should be fully informed of how their information will be used, how anonymous the information will be, and for how long it will be stored.
The participant should be able to withdraw at any stage during the research, and may also ask that all of their information, including film, photographs and testimonials be removed.
On occasion, the exact nature of the research cannot be revealed to the social science subjects, in case it influences the findings. In this case, the work must be constantly overseen by an independent ethical review panel and peers. In addition, the right to withdraw consent must be maintained.
These ethics are extremely important for maintaining the integrity of participation. It is very easy for researchers using social science subjects to cross the line and cause lasting damage to a group or community.
Historically, ethical practice has been sloppy in some social science experiments, such as the use of deception in the Milgram study, the Stanford prison experiment, the Bobo doll experiment or the Asch experiment. These studies would probably be disallowed today. This is especially important with the number of documentaries following groups or tribes, because it is very easy to stray into editing unfavorably and sensationalizing footage for ratings.
  11. Pretest-Posttest Research Design
Fig. A: Pretest-Posttest Design


This design allows researchers to compare the final posttest results between the two groups, giving them an idea of the overall effectiveness of the intervention or treatment. (C)
The researcher can see how both groups changed from pretest to posttest, whether one, both or neither improved over time. If the control group also showed a significant improvement, then the researcher must attempt to uncover the reasons behind this. (A and A1)
The researchers can compare the scores in the two pretest groups, to ensure that the randomization process was effective. (B)
These checks evaluate the efficiency of the randomization process and also determine whether the group given the treatment showed a significant difference.
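As a minimal sketch of how these comparisons might be run, the following Python snippet checks the pretest equivalence of the two groups and then compares change and posttest results, labelled to match the comparisons above. All scores are invented for illustration, and scipy.stats is assumed to be available.

```python
# A minimal sketch of the pretest-posttest comparisons described above.
# All scores are invented for illustration; scipy.stats provides the t-tests.
from scipy import stats

pre_treatment  = [52, 48, 55, 50, 47, 53, 49, 51]
post_treatment = [61, 58, 66, 60, 57, 64, 59, 62]
pre_control    = [51, 49, 54, 50, 48, 52, 50, 53]
post_control   = [53, 50, 55, 52, 49, 54, 51, 54]

# (B) pretest comparison: checks that randomization produced equivalent groups
t_b, p_b = stats.ttest_ind(pre_treatment, pre_control)

# (A) and (A1): within-group change from pretest to posttest (paired tests)
t_a,  p_a  = stats.ttest_rel(pre_treatment, post_treatment)
t_a1, p_a1 = stats.ttest_rel(pre_control, post_control)

# (C) posttest comparison: overall effectiveness of the intervention
t_c, p_c = stats.ttest_ind(post_treatment, post_control)

print(f"(B) pretest equivalence:  p = {p_b:.3f}")
print(f"(A) treatment change:     p = {p_a:.3f}")
print(f"(A1) control change:      p = {p_a1:.3f}")
print(f"(C) posttest difference:  p = {p_c:.3f}")
```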


 
Pretest-posttest designs are an expansion of the posttest only design with nonequivalent groups, one of the simplest methods of testing the effectiveness of an intervention.
In this design, which uses two groups, one group is given the treatment and the results are gathered at the end. The control group receives no treatment, over the same period of time, but undergoes exactly the same tests.
Statistical analysis can then determine if the intervention had a significant effect. One common example of this is in medicine; one group is given a medicine, whereas the control group is given none, and this allows the researchers to determine if the drug really works. This type of design, whilst commonly using two groups, can be slightly more complex. For example, if different dosages of a medicine are tested, the design can be based around multiple groups.
Whilst this posttest-only design does find many uses, it is limited in scope and contains many threats to validity. It is very poor at guarding against assignment bias, because the researcher knows nothing about the individual differences within the control group and how they may have affected the outcome. Even with randomization of the initial groups, this failure to address assignment bias means that the statistical power is weak.
The results of such a study will always be limited in scope and, resources permitting, most researchers use a more robust design, of which pretest-posttest designs are one. The posttest-only design with non-equivalent groups is usually reserved for experiments performed after the fact, such as a medical researcher wishing to observe the effect of a medicine that has already been administered.
  12. Experimental Research Design
True experimental design is regarded as the most accurate form of experimental research, in that it tries to prove or disprove a hypothesis mathematically, with statistical analysis.
For some of the physical sciences, such as physics, chemistry and geology, they are standard and commonly used. For social sciences, psychology and biology, they can be a little more difficult to set up.

The independent variable, also known as the manipulated variable, lies at the heart of any quantitative experimental design.
This is the factor manipulated by the researcher, and it produces one or more results, known as dependent variables. Usually no more than one or two independent variables are tested in an experiment; otherwise it is difficult to determine the influence of each upon the final results.
There may be several dependent variables, because manipulating the independent variable can influence many different things.
For example, an experiment to test the effects of a certain fertilizer upon plant growth could measure height, number of fruits and the average weight of the fruit produced. All of these are valid analyzable factors arising from the manipulation of one independent variable, the amount of fertilizer.
The term independent variable is often a source of confusion; many people assume that the name means that the variable is independent of any manipulation.
The name arises because the variable is isolated from any other factor, allowing experimental manipulation to establish analyzable results.
Some research papers appear to give results manipulating more than one experimental variable, but this is usually a false impression.
Each manipulated variable is likely to be an experiment in itself, one area where the words ‘experiment’ and ‘research’ differ. It is simply more convenient for the researcher to bundle them into one paper, and discuss the overall results.
The botanical researcher above might also study the effects of temperature, or the amount of water on growth, but these must be performed as discrete experiments, with only the conclusion and discussion amalgamated at the end.
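As a minimal sketch of this structure (one independent variable, several dependent variables), the following Python snippet arranges hypothetical fertilizer-trial measurements and fits a simple linear trend to each dependent variable with numpy; all the data are invented for illustration.

```python
# A minimal sketch of one independent variable (fertilizer dose) and several
# dependent variables (height, fruit count, mean fruit weight). Data invented.
import numpy as np

fertilizer_g = np.array([0, 10, 20, 30, 40])        # independent variable
height_cm    = np.array([30.1, 34.2, 38.5, 41.0, 42.8])
fruit_count  = np.array([4, 6, 9, 11, 12])
fruit_wt_g   = np.array([52.0, 55.3, 57.9, 58.4, 58.9])

# Fit a straight line to each dependent variable separately.
for name, y in [("height (cm)", height_cm),
                ("fruit count", fruit_count),
                ("fruit weight (g)", fruit_wt_g)]:
    slope, intercept = np.polyfit(fertilizer_g, y, 1)
    print(f"{name}: {slope:.2f} per gram of fertilizer")
```

Temperature or watering, as noted above, would each be varied in a separate experiment rather than being added as extra manipulated columns here.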
  13. Chi Square Test
The chi-square is one of the most popular statistics because it is easy to calculate and interpret. There are two kinds of chi-square tests. The first is called a one-way analysis, and the second is called a two-way analysis. The purpose of both is to determine whether the observed frequencies (counts) markedly differ from the frequencies that we would expect by chance.
The observed cell frequencies are organized in rows and columns like a spreadsheet. This table of observed cell frequencies is called a contingency table, and the chi-square test is part of a contingency table analysis.
The chi-square statistic is the sum of the contributions from each of the individual cells. Every cell in a table contributes something to the overall chi-square statistic. If a given cell differs markedly from the expected frequency, then the contribution of that cell to the overall chi-square is large. If a cell is close to the expected frequency for that cell, then the contribution of that cell to the overall chi-square is low. A large chi-square statistic indicates that somewhere in the table, the observed frequencies differ markedly from the expected frequencies. It does not tell which cell (or cells) are causing the high chi-square...only that they are there. When a chi-square is high, you must visually examine the table to determine which cell(s) are responsible.
When there are exactly two rows and two columns, the chi-square statistic becomes inaccurate, and Yates' correction for continuity is usually applied. Statistics Calculator will automatically use Yates' correction for two-by-two tables when the expected frequency of any cell is less than 5 or the total N is less than 50.
If there is only one column or one row (a one-way chi-square test), the degrees of freedom is the number of cells minus one. For a two-way chi-square, the degrees of freedom is the number of rows minus one, times the number of columns minus one.
Using the chi-square statistic and its associated degrees of freedom, the software reports the probability that the differences between the observed and expected frequencies occurred by chance. Generally, a probability of .05 or less is considered to be a significant difference.
A standard spreadsheet interface is used to enter the counts for each cell. After you've finished entering the data, the program will print the chi-square, degrees of freedom and probability of chance.
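As a minimal sketch of such a contingency table analysis in code, the following Python snippet (invented counts) uses scipy.stats.chi2_contingency, which returns the chi-square statistic, the probability, the degrees of freedom and the expected frequencies:

```python
# A minimal sketch of a two-way chi-square test. Counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: achievers / non-achievers; columns: experimental / control group.
observed = np.array([[30, 18],
                     [10, 22]])

# For 2x2 tables scipy applies Yates' continuity correction by default.
chi2, p, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("expected frequencies:\n", expected)
```

If p is .05 or less, the observed frequencies differ significantly from those expected by chance, as described above.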

In a 2×2 table (four cells) there is a simple formula that eliminates the need to calculate the theoretical frequencies for each cell:


Table 2: Achievement – Non-achievement Data

Competency Achievements | Experimental Group | Control Group | Total No. of Students
Achievers               | A                  | B             | A + B
Non-achievers           | C                  | D             | C + D
Total                   | A + C              | B + D         | N
Degrees of Freedom = (Rows – 1)(Columns – 1) = 1

χ² = N(AD – BC)² / [(A + B)(C + D)(A + C)(B + D)]
For each level of significance there exists a critical value of chi-square. For rejection of the null hypothesis, the calculated value of chi-square must equal or exceed the critical value depicted in the table of critical values (Table 3).



Table 3: Critical Values of Chi-Square (1 degree of freedom)

Level of Significance | 0.05 | 0.01
Chi-Square Value      | 3.84 | 6.64
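As a minimal sketch tying the shortcut formula to Table 3, the following Python snippet (cell counts A, B, C and D invented) computes the uncorrected 2×2 chi-square and compares it with the critical values, which scipy's chi2 distribution reproduces:

```python
# A minimal sketch of the 2x2 shortcut formula and Table 3's critical values.
# Cell counts A, B, C, D are invented for illustration.
from scipy.stats import chi2

A, B, C, D = 30, 18, 10, 22          # achievers/non-achievers by group
N = A + B + C + D

# Shortcut formula: no expected frequencies needed for a 2x2 table.
chi_square = N * (A * D - B * C) ** 2 / ((A + B) * (C + D) * (A + C) * (B + D))

# Critical values at 1 degree of freedom (matches Table 3: 3.84 and 6.64).
crit_05 = chi2.ppf(0.95, df=1)       # about 3.84
crit_01 = chi2.ppf(0.99, df=1)       # about 6.63

print(f"chi-square = {chi_square:.2f}")
print(f"reject H0 at 0.05? {chi_square >= crit_05}")
print(f"reject H0 at 0.01? {chi_square >= crit_01}")
```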



In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.
Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate is equal to the number of independent scores that go into the estimate minus the number of parameters estimated as intermediate steps in the estimation of the parameter itself.
The number of degrees of freedom is the number of independent observations in a sample of data that are available to estimate a parameter of the population from which that sample is drawn. For example, if we have two observations, when calculating the mean we have two independent observations; however, when calculating the variance, we have only one independent observation, since the two observations are equally distant from the mean.
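As a minimal sketch of this convention in practice, the following Python snippet uses numpy's ddof ("delta degrees of freedom") parameter, which subtracts the number of parameters estimated as intermediate steps (here, the mean) from the number of observations:

```python
# A minimal sketch of degrees of freedom in a variance estimate.
import numpy as np

sample = np.array([4.0, 6.0])        # two observations, mean = 5.0

# Population formula divides by n; the unbiased sample formula divides by
# n - 1, reflecting the one degree of freedom used up estimating the mean.
var_population = sample.var(ddof=0)  # divides by 2 -> 1.0
var_sample     = sample.var(ddof=1)  # divides by 1 -> 2.0

print(var_population, var_sample)
```

With two observations and one estimated mean, only one independent deviation remains, exactly as described above.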

  14. Computer
Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into a wristwatch, and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". The embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are however the most numerous.
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore computers ranging from a mobile phone to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.
 In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine. Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.
In the late 1880s, Herman Hollerith invented the recording of data on a machine readable medium. Prior uses of machine readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator, and the keypunch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine." The inventor of the program-controlled computer was Konrad Zuse, who built the first working computer in 1941 and later, in 1955, the first computer based on magnetic storage. George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication, including complex arithmetic and programmability.

A desktop computer is a personal computer (PC) in a form intended for regular use at a single location, as opposed to a mobile laptop or portable computer. Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small.
Fig 2: A Desktop Computer
Desktop computers come in a variety of types ranging from large vertical tower cases to small form factor models that can be tucked behind an LCD monitor. "Desktop" can also indicate a horizontally-oriented computer case usually intended to have the display screen placed on top to save space on the desk top. Most modern desktop computers have separate screens and keyboards. Tower cases are desktop cases in the former sense, though not in the latter. Cases intended for home theater PC systems are usually considered to be desktop cases in both senses, regardless of orientation and placement.


  15. Primary Reinforcement
In operant conditioning, reinforcement occurs when an event following a response causes an increase in the probability of that response occurring in the future. Response strength can be assessed by measures such as the frequency with which the response is made (for example, a pigeon may peck a key more times in the session), or the speed with which it is made (for example, a rat may run a maze faster). The change in the environment contingent upon the response is called a reinforcer. A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing to function as a reinforcer, and most likely has obtained this function through evolution and its role in species' survival. Examples of primary reinforcers include sleep, food, air, water, and sex. Other primary reinforcers, such as certain drugs, may mimic the effects of other primary reinforcers. While these primary reinforcers are fairly stable through life and across individuals, the reinforcing value of different primary reinforcers varies due to multiple factors (e.g., genetics, experience). Thus, one person may prefer one type of food while another abhors it. Or one person may eat lots of food while another eats very little. So even though food is a primary reinforcer for both individuals, the value of food as a reinforcer differs between them.

  16. Blogging              Many blogs provide commentary or news on a particular subject; others function as more personal online diaries. A typical blog combines text, images, and links to other blogs, Web pages, and other media related to its topic. The ability for readers to leave comments in an interactive format is an important part of many blogs. Most blogs are primarily textual, although some focus on art (art blog), photographs (photoblog), videos (video blogging), music (MP3 blog), and audio (podcasting). Microblogging is another type of blogging, featuring very short posts.

Many bloggers, particularly those engaged in participatory journalism, differentiate themselves from the mainstream media, while others are members of that media working through a different channel. Some institutions see blogging as a means of "getting around the filter" and pushing messages directly to the public. Some critics worry that bloggers respect neither copyright nor the role of the mass media in presenting society with credible news. Bloggers and other contributors to user-generated content are behind Time magazine naming its 2006 Person of the Year as "you". Many mainstream journalists, meanwhile, write their own blogs — well over 300, according to CyberJournalist.net's J-blog list. The first known use of a blog on a news site was in August 1998, when Jonathan Dube of The Charlotte Observer published one chronicling Hurricane Bonnie. Some bloggers have moved over to other media. The following bloggers (and others) have appeared on radio and television: Duncan Black (known widely by his pseudonym, Atrios), Glenn Reynolds (Instapundit), Markos Moulitsas Zúniga (Daily Kos), Alex Steffen (Worldchanging) and Ana Marie Cox (Wonkette). In counterpoint, Hugh Hewitt exemplifies a mass-media personality who has moved in the other direction, adding to his reach in "old media" by being an influential blogger. Equally, many established authors, for example Mitzi Szereto, have started using blogs not only to update fans on their current works but also to expand into new areas of writing.

Blogs have also had an influence on minority languages, bringing together scattered speakers and learners; this is particularly so with blogs in Gaelic languages. Minority-language publishing (which may lack economic feasibility) can find its audience through inexpensive blogging. There are many examples of bloggers who have published books based on their blogs, e.g., Salam Pax, Ellen Simonetti, Jessica Cutler, ScrappleFace. Blog-based books have been given the name blook. A prize for the best blog-based book was initiated in 2005, the Lulu Blooker Prize. However, success has been elusive offline, with many of these books not selling as well as their blogs. Only blogger Tucker Max made the New York Times Bestseller List. The book based on Julie Powell's blog "The Julie/Julia Project" was made into the film Julie & Julia, apparently the first blog-based book to be adapted for film.
An edublog is a blog written by someone with a stake in education. Examples might include blogs written by or for teachers, blogs maintained for the purpose of classroom instruction, or blogs written about educational policy. The collection of these blogs is called the edublogosphere by some, in keeping with the larger blogosphere, although that label is not necessarily universally agreed upon. (Others refer to the community or collection of blogs and bloggers as the edusphere.) Similarly, educators who blog are sometimes called edubloggers. Communities of edubloggers occasionally gather for meetups or unconference sessions organized using a wiki at edubloggercon.com.


  17. Teacher
In education, a teacher is a person who provides schooling for others. A teacher who facilitates education for an individual student may also be described as a personal tutor. The role of teacher is often formal and ongoing, carried out by way of occupation or profession at a school or other place of formal education. In many countries, a person who wishes to become a teacher at state-funded schools must first obtain professional qualifications or credentials from a university or college. These professional qualifications may include the study of pedagogy, the science of teaching. Teachers may use a lesson plan to facilitate student learning, providing a course of study which covers a standardized curriculum. A teacher's role may vary between cultures. Teachers teach literacy and numeracy, or some of the other school subjects. Other teachers may provide instruction in craftsmanship or vocational training, the Arts, religion or spirituality, civics, community roles, or life skills. In some countries, formal education can take place through home schooling.
A teacher's professional duties may extend beyond formal teaching. Outside of the classroom teachers may accompany students on field trips, supervise study halls, help with the organization of school functions, and serve as supervisors for extracurricular activities. In some education systems, teachers may have responsibility for student discipline.
Around the world, teachers are often required to obtain specialized education and professional knowledge, and to adhere to codes of ethics and internal monitoring.
There are a variety of bodies designed to instill, preserve and update the knowledge and professional standing of teachers. Around the world many governments operate teacher's colleges, which are generally established to serve and protect the public interest through certifying, governing and enforcing the standards of practice for the teaching profession.
The functions of the teacher's colleges may include setting out clear standards of practice, providing for the ongoing education of teachers, investigating complaints involving members, conducting hearings into allegations of professional misconduct and taking appropriate disciplinary action, and accrediting teacher education programs. In many situations teachers in publicly funded schools must be members in good standing with the college, and private schools may also require their teachers to be college members. In other areas these roles may belong to the State Board of Education, the Superintendent of Public Instruction, the State Education Agency or other governmental bodies. In still other areas teaching unions may be responsible for some or all of these duties.
  18. Teaching Aid
19.   Computer-aided Assessment (also but less commonly referred to as e-assessment), ranging from automated multiple-choice tests to more sophisticated systems, is becoming increasingly common. With some systems, feedback can be geared towards a student's specific mistakes, or the computer can navigate the student through a series of questions adapting to what the student appears to have learned or not learned. The best examples follow a formative assessment structure and are called "Online Formative Assessment". This involves making an initial formative assessment by sifting out the incorrect answers. The author/teacher will then explain what the pupil should have done with each question. The system will then give the pupil at least one practice attempt at each slight variation of the sifted-out questions. This is the formative learning stage. The next stage is to make a summative assessment with a new set of questions covering only the topics previously taught. The term learning design has sometimes come to refer to the type of activity enabled by software such as the open-source system LAMS, which supports sequences of activities that can be both adaptive and collaborative. The IMS Learning Design specification is intended as a standard format for learning designs, and IMS LD Level A is supported in LAMS V2. E-learning has been replacing traditional settings due to its cost-effectiveness.
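As a minimal sketch of that "Online Formative Assessment" flow (sift out incorrect answers, explain, practice a variation of each, then a summative set), the following Python snippet uses invented questions and a placeholder answer-checker; nothing here is drawn from a specific assessment product.

```python
# A minimal sketch of the online formative assessment flow described above.
# Questions, answers, and the grading logic are invented placeholders.

def ask(question, answer):
    """Placeholder for presenting a question and checking the response."""
    response = input(f"{question} ")
    return response.strip() == answer

formative = [("7 x 8 =", "56"), ("9 x 6 =", "54"), ("12 x 3 =", "36")]
variations = {"7 x 8 =": [("8 x 7 =", "56")],
              "9 x 6 =": [("6 x 9 =", "54")],
              "12 x 3 =": [("3 x 12 =", "36")]}

# Formative stage: sift out incorrect answers, explain, then practice variants.
for question, answer in formative:
    if not ask(question, answer):
        print(f"Explanation: the correct answer is {answer}.")
        for v_question, v_answer in variations[question]:
            ask(v_question, v_answer)   # at least one practice variation

# Summative stage: a new set of questions covering only the taught topics.
summative = [("7 x 9 =", "63"), ("6 x 12 =", "72")]
score = sum(ask(q, a) for q, a in summative)
print(f"Summative score: {score}/{len(summative)}")
```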
20.   E-Learning pioneer Bernard Luskin argues that the "E" must be understood to have broad meaning if e-learning is to be effective. Luskin says that the "e" should be interpreted to mean exciting, energetic, enthusiastic, emotional, extended, excellent, and educational in addition to "electronic", the traditional interpretation. This broader interpretation allows for 21st-century applications and brings learning and media psychology into the equation.

In higher education especially, the increasing tendency is to create a Virtual Learning Environment (VLE) (which is sometimes combined with a Management Information System (MIS) to create a Managed Learning Environment) in which all aspects of a course are handled through a consistent user interface standard throughout the institution. A growing number of physical universities, as well as newer online-only colleges, have begun to offer a select set of academic degree and certificate programs via the Internet at a wide range of levels and in a wide range of disciplines. While some programs require students to attend some campus classes or orientations, many are delivered completely online. In addition, several universities offer online student support services, such as online advising and registration, e-counseling, online textbook purchase, student governments and student newspapers. E-learning can also refer to educational web sites such as those offering learning scenarios, worksheets and interactive exercises for children. The term is also used extensively in the business sector, where it generally refers to cost-effective online training.

The recent trend in the e-learning sector is screen-casting. There are many screencasting tools available, but the latest buzz is all about the web-based screencasting tools which allow users to create screencasts directly from their browser and make the video available online so that viewers can stream it directly. The advantage of such tools is that they give the presenter the ability to show his ideas and flow of thoughts rather than simply explain them, which may be more confusing when delivered via simple text instructions. With the combination of video and audio, the expert can mimic the one-on-one experience of the classroom and deliver clear, complete instructions. From the learner's point of view this provides the ability to pause and rewind, and gives the learner the advantage of moving at their own pace, something a classroom cannot always offer. One such example of an e-learning platform based on screencasts is YoHelpOnline.

Communication technologies are generally categorized as asynchronous or synchronous. Asynchronous activities use technologies such as blogs, wikis, and discussion boards. The idea here is that participants may engage in the exchange of ideas or information without depending on other participants' involvement at the same time. Electronic mail (email) is also asynchronous, in that mail can be sent or received without having both participants involved at the same time. Synchronous activities involve the exchange of ideas and information with one or more participants during the same period of time. A face-to-face discussion is an example of synchronous communication. Synchronous activities occur with all participants joining in at once, as with an online chat session or a virtual classroom or meeting. Virtual classrooms and meetings can often use a mix of communication technologies.
In many models, the writing community and the communication channels relate to the e-learning and m-learning communities. Both communities provide a general overview of the basic learning models and the activities required for participants to join the learning sessions across the virtual classroom, or even across standard classrooms enabled by technology. Many activities, essential for the learners in these environments, require frequent chat sessions in the form of virtual classrooms and/or blog meetings.



  21. A screen-cast is a digital recording of computer screen output, also known as a video screen capture, often containing audio narration. Although the term screencast dates from 2004, products such as Lotus ScreenCam were used as early as 1994.
  22. A blended learning approach can combine face-to-face instruction with computer-mediated instruction. It also applies science or IT activities with the assistance of educational technologies, using computers, cellular phones or iPhones, satellite television channels, videoconferencing and other emerging electronic media. Learners and teachers work together to improve the quality of learning and teaching, the ultimate aim of blended learning being to provide realistic, practical opportunities for learners and teachers to make learning independent, useful, sustainable and ever-growing.




