Questions for the defence exam

  1. Discuss some of the newer forms of English being spoken in Britain, and what they tell us about contemporary British society.

There are two main strands of contemporary English being spoken in Britain, both of which stem from Cockney, i.e. the traditional form of working-class speech in London.

Cockney: the traditional dialect of working-class Londoners (now being replaced by Multicultural London English, MLE, also known as ‘Jafaican’)

MLE: the latest in a long line of ‘stigmatised’ dialects in a class- and accent-conscious society; a ‘multiethnolect’ emerging from a ‘feature pool’ derived from:

1. Estuary English: spoken in South East England, especially along the River Thames, and most popular among young people – an informalised version of Standard English with elements of Cockney.

2. Mockney: a stereotypical Mockney speaker comes from an upper-middle-class background. Mock Cockney is used as a performance – rougher, casual, informal, vulgar, anti-posh – often to give the impression of being an average person and to create a feeling of connectedness with ordinary people.

It is an affectation sometimes adopted for aesthetic or theatrical purposes, at other times just to sound “cool”, generate street credibility or give the false impression that the speaker rose from humble beginnings and became prominent through some innate talent rather than through the education, contacts and other advantages a privileged background tends to bring.

Speakers of traditional Cockney and also of Estuary English tend to change ‘th’ at the beginning of a word to something which sounds like ‘f’ – so instead of three they say free, and instead of thought they say fought – much as Polish speakers often do.

Another characteristic is heavy use of glottal stops – pronouncing butter without the ‘tt’, or wet without the ‘t’ – and something called L-vocalisation, whereby milk sounds like miwk.

What is more, speakers seem to use more ‘r’ – not as much as in American English, but noticeably some. These new varieties can be hard for outsiders to understand.

Some changes in grammar can also be observed easily, for example double negatives, frequent question tags, or the use of ‘me’ in place of ‘my’.

In the field of vocabulary, something called rhyming slang is present: a word is replaced by a pair of words, the second of which rhymes with the word replaced, e.g. wife = trouble and strife, feet = plates of meat, mouth = north and south.

All these changes suggest that many Britons nowadays want to stand out and distance themselves from the tradition of ‘posh’ speech, i.e. RP (Received Pronunciation), also known as BBC English or the Queen’s English.

Accent is an important factor in British society: it gives information about a speaker’s region, class and occupation.

2. What is the significance of the Macpherson Report?

Stephen Lawrence was a black British man who was murdered in a racially motivated attack while waiting for a bus on the evening of 22 April 1993. The case became one of the highest-profile racial killings in UK history; its fallout included profound cultural changes in attitudes to racism and the police, changes to the law and police practice, and the partial revocation of double jeopardy laws, before two of the perpetrators were finally convicted in 2012, almost 20 years later.

Due to public pressure, an inquiry into the murder of Stephen Lawrence was agreed in 1997. Sir William Macpherson, a retired High Court judge, led this inquiry into the conduct of the police during the murder investigation. The extensive report that resulted, known as the Macpherson Report, analyses the institutional and individual behaviour of the police during the investigation of Stephen Lawrence’s murder.

Whom does it concern?

This report criticises the Metropolitan Police (the police force for London) and concludes that the police did not carry out the investigation in an appropriate manner. The report labels the Metropolitan Police force institutionally racist.

What were the effects?

The Report, which was published in 1999, highlights the Metropolitan Police’s key areas of failure during the investigation: 

"Institutional Racism"

The Macpherson Report delivered a damning assessment of the "institutional racism" within the Metropolitan police and policing generally.

It made 70 recommendations, many aimed specifically at improving police attitudes to racism, and stressed the importance of a rapid increase in the numbers of black and Asian police officers.

The government pledged to increase the number of officers from minority ethnic groups from around 2,500 to 8,000 by 2009.

Macpherson Report in a nutshell - the things you should know:

2. Races

The UK has a history of small-scale non-white immigration, with Liverpool having the oldest Black population in the country dating back to at least the 1730s during the period of the African slave trade,[4] and the oldest Chinese community in Europe, dating to the arrival of Chinese seamen in the 19th century.[5]

Since 1948 substantial immigration from Africa, the Caribbean and South Asia has been a legacy of ties forged by the British Empire.[6] Migration from new EU member states in Central and Eastern Europe since 2004 has resulted in growth in these population groups.[7]

Sociologist Steven Vertovec argues that whereas "Britain's immigrant and ethnic minority population has conventionally been characterized by large, well-organized African-Caribbean and South Asian communities of citizens originally from Commonwealth countries or formerly colonial territories", more recently the level of diversity of the population has increased significantly, as a result of "an increased number of new, small and scattered, multiple-origin, transnationally connected, socio-economically differentiated and legally stratified immigrants".[8]

The 2001 UK Census classified ethnicity into several groups: White, Black, Asian, Mixed, Chinese and Other.

Ethnic Groups - 2011 Census data

Multiculturalism and integration

With considerable migration after the Second World War making the UK an increasingly ethnically and racially diverse state, race relations policies have been developed that broadly reflect the principles of multiculturalism, although there is no official national commitment to multiculturalism.[20][21][22] This model has faced criticism on the grounds that it has failed to sufficiently promote social integration,[23][24][25] although some commentators have questioned the dichotomy between diversity and integration that this critique presumes.[24] It has been argued that the UK government has, since 2001, moved away from policy characterised by multiculturalism and towards the assimilation of minority communities.[26]

Attitudes to multiculturalism

A poll conducted by MORI for the BBC in 2005 found that 62 per cent of respondents agreed that multiculturalism made Britain a better place to live, compared to 32 per cent who saw it as a threat.[27] Ipsos MORI data from 2008, by contrast, showed that only 30 per cent saw multiculturalism as making Britain a better place to live, with 38 per cent seeing it as a threat. 41 per cent of respondents to the 2008 poll favoured the development of a shared identity over the celebration of diverse values and cultures, with 27 per cent favouring the latter and 30 per cent undecided.[28]

A study conducted for the Commission for Racial Equality (CRE) in 2005 found that in England, the majority of ethnic minority participants called themselves British, whereas indigenous English participants said English first and British second. In Wales and Scotland the majority of white and ethnic minority participants said Welsh or Scottish first and British second, although crucially they saw no incompatibility between the two identities.[29] Other research conducted for the CRE found that white participants felt that there was a threat to Britishness from large-scale immigration, the claims that they perceived ethnic minorities made on the welfare state, a rise in moral pluralism and perceived political correctness. Much of this frustration was vented at Muslims rather than minorities in general. Muslim participants in the study reported feeling victimised and stated that they felt that they were being asked to choose between Muslim and British identities, whereas they saw it possible to be both.[30]

Immigration:

Modern

Indians

By the mid-19th century, there were at least 40,000 Indian seamen, diplomats, scholars, soldiers, officials, tourists, businessmen and students in Great Britain.[8] In 1855 more than 25,000 of these were lascar seamen.[9][10] By the late 19th and early 20th centuries, there were around 70,000 South Asians in Britain,[11] 51,616 of whom were lascar seamen at the beginning of the First World War.[12]

Africans

During the 18th century, a substantial population of black people, thought to number about 15,000 by mid-century, were brought to Britain, initially largely as the captain's share of the cargo of transatlantic slave ships. Many of these people became servants in aristocratic households and are frequently depicted in contemporary portraits of the family, often in a similar manner to family pets. Many black people became part of the urban poor and were often depicted in the caricatures and cartoons of William Hogarth, but others attained highly respected positions in society, e.g. Ignatius Sancho and Francis Barber – a servant to Dr Samuel Johnson who became a beneficiary of his will. Such ships stopped carrying black people to Britain after Britain banned the slave trade in 1807.

Following the British defeat in the American War of Independence over 1,100 Black Loyalist troops who had fought on the losing side were transported to Britain, but they mostly ended up destitute on London's streets and were viewed as a social problem. The Committee for the Relief of the Black Poor was formed. They distributed relief and helped the men to go overseas, some to what remained of British North America. In 1786, the committee funded an expedition of 280 black men, forty black women and seventy white wives and girlfriends to Sierra Leone. The settlement failed and within two years all but sixty of the migrants had died.[13]

Germans

Throughout the 19th century a substantial population of German immigrants built up in Britain, numbering 28,644 in 1861. London held around half of this population, and other sizeable communities existed in Manchester, Bradford and elsewhere. The German immigrant community was the largest group until 1891, when it became second only to Russian Jews. There was a mixture of classes and religious groupings, and a flourishing culture built up, with the growth of middle- and working-class clubs. Waiters and clerks were two main occupations, and a considerable number of those who worked in these professions went on to become restaurant owners and businessmen.[14] This community maintained its size until the First World War, when public anti-German feeling became very prominent and the Government enacted a policy of forced internment and repatriation. The community in 1911 had reached 53,324, but fell to just over 20,000 after the war.[15]

Russian Jews

England has had small Jewish communities for many centuries, subject to occasional expulsions, but British Jews numbered fewer than 10,000 at the start of the 19th century. After 1881 Russian Jews suffered bitter persecutions, and British Jews led fund-raising to enable their Russian co-religionists to emigrate to the United States. However, out of some 2,000,000 who left Russia by 1914, around 120,000 settled permanently in Britain. One of the main concentrations was the same Spitalfields area where Huguenots had earlier congregated. Immigration was reduced by the Aliens Act 1905 and virtually curtailed by the 1914 Aliens Restriction Act.[16] In addition to those Russian Jews who settled permanently in the UK an estimated 500,000 Eastern European Jews transmigrated through British ports between 1881 and 1924.[17] Most were bound for the United States and others migrated to Canada, South Africa, Latin America and the Antipodes. [18]

After World War II

Since 1945, immigration to the United Kingdom under British nationality law has been substantial, in particular from the Republic of Ireland and from the former colonies and territories of the British Empire such as India, Bangladesh, Pakistan, the Caribbean, South Africa, Kenya and Hong Kong. Other immigrants have come as asylum seekers, seeking protection as refugees under the United Nations 1951 Refugee Convention, or from member states of the European Union, exercising one of the European Union's Four Freedoms.[1]

About 70% of the population increase between the 2001 and 2011 censuses was due to foreign-born immigration. 7.5 million people (11.9 percent of the population at the time) were born abroad, although the census gives no indication of their immigration status or intended length of stay.[2]

Provisional figures show that in 2013, 526,000 people arrived to live in the UK whilst 314,000 left, meaning that net inward migration was 212,000. The number of people immigrating to the UK increased between 2012 and 2013 by 28,000 whereas the number emigrating fell by 7,000.[3]
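A quick check of how the net figure follows from the two totals (a minimal arithmetic sketch using the rounded numbers quoted above):

\[
\text{net migration} = \text{immigration} - \text{emigration} = 526{,}000 - 314{,}000 = 212{,}000
\]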

From April 2013 to April 2014, a total of 560,000 immigrants arrived in the UK, including 81,000 British citizens and 214,000 from other parts of the EU. An estimated 317,000 people left, including 131,000 British citizens and 83,000 other EU citizens. The top countries represented in terms of arrivals were: China, India, Poland, United States, and Australia.[4]

In 2006, there were 149,035 applications for British citizenship, 32 percent fewer than in 2005. The number of people granted citizenship during 2006 was 154,095, 5 percent fewer than in 2005. The largest groups of people granted British citizenship were from India, Pakistan, Somalia, and the Philippines.[5] In 2006, 134,430 people were granted settlement in the UK, a drop of 25 per cent on 2005.[6]

In comparison, migration to and from Central and Eastern Europe has increased since 2004 with the accession to the European Union of eight Central and Eastern European states, since there is free movement of labour within the EU.[7] In 2008, the UK government began phasing in a new points-based immigration system for people from outside of the European Economic Area.

Post-war immigration (1945–1983)

Following the end of World War II, substantial groups of people from Soviet-controlled territories settled in Britain, particularly Poles and Ukrainians. The UK recruited displaced people as so-called European Volunteer Workers in order to provide labour to industries that were required in order to aid economic recovery after the war.[21] In the 1951 census, the Polish-born population of the UK numbered some 162,339, up from 44,642 in 1931.[22][23]

Indians began arriving in the UK in large numbers shortly after their country gained independence in 1947. More than 60,000 arrived before 1955, many of whom drove buses, or worked in foundries or textile factories. Later arrivals opened corner shops or ran post offices. The flow of Indian immigrants peaked between 1965 and 1972, boosted in particular by Idi Amin's sudden decision to expel all 50,000 Gujarati Indians from Uganda. Around 30,000 Ugandan Asians migrated to the UK.[24]

There was also an influx of refugees from Hungary, following the crushing of the 1956 Hungarian revolution, numbering 20,990.[25]

Until the Commonwealth Immigrants Act 1962, all Commonwealth citizens could enter and stay in the UK without any restriction. The Act made Citizens of the United Kingdom and Colonies (CUKCs), whose passports were not directly issued by the UK Government (i.e. passports issued by the Governor of a colony or by the Commander of a British protectorate), subject to immigration control.

Enoch Powell gave the famous "Rivers of Blood" speech on 20 April 1968, in which he warned his audience of what he believed would be the consequences of continued unchecked immigration from the Commonwealth to Britain. Opposition Leader Edward Heath sacked Powell from his Shadow Cabinet the day after the speech, and he never held another senior political post. Powell received 110,000 letters as a result of the speech, only 2,300 of them disapproving,[26] and a Gallup poll at the end of April showed that 74% of those asked agreed with his speech. After the 'Rivers of Blood' speech, Powell was transformed into a national public figure and won huge support across Britain. Three days after the speech, on 23 April, as the Race Relations Bill was being debated in the House of Commons, around 2,000 dockers walked off the job to march on Westminster in protest against Powell's dismissal,[27] and the next day 400 meat porters from Smithfield market handed in a 92-page petition in support of Powell.[28]

By 1972, only holders of work permits, or people with parents or grandparents born in the UK could gain entry – significantly reducing primary immigration from Commonwealth countries.[17]

Contemporary immigration (1983 onwards)

The British Nationality Act 1981, which came into force in 1983, distinguishes between British citizens and British Overseas Territories citizens. It also made a distinction between nationality by descent and nationality other than by descent. Citizens by descent cannot automatically pass on British nationality to a child born outside the United Kingdom or its Overseas Territories (though in some situations the child can be registered as a citizen).

Immigration officers have to be satisfied about a person's nationality and identity, and entry can be refused if they are not satisfied.[29]

During the 1980s and 1990s, the civil war in Somalia led to a large number of Somali immigrants, who make up the majority of the current Somali population in the UK. In the late 1980s, most of these early migrants were granted asylum, while those arriving later in the 1990s more often obtained temporary status. There has also been some secondary migration of Somalis to the UK from the Netherlands and Scandinavia. The main driving forces behind this secondary migration included a desire to reunite with family and friends and better employment opportunities.[30]

Non-European immigration rose significantly during the period from 1997, not least because of the government's abolition of the primary purpose rule in June 1997.[31] This change made it easier for UK residents to bring foreign spouses into the country.

The former government advisor Andrew Neather in the Evening Standard stated that the deliberate policy of ministers from late 2000 until early 2008 was to open up the UK to mass migration.[32][33]

European Union

One of the Four Freedoms of the European Union, of which the United Kingdom is a member, is the right to the free movement of people as codified in Directive 2004/38/EC and the EEA Regulations (UK).

Since the expansion of the EU on 1 May 2004, the UK has accepted immigrants from Central and Eastern Europe, Malta and Cyprus, although the substantial Maltese, Greek-Cypriot and Turkish-Cypriot communities were established earlier through their Commonwealth connection. There are restrictions on the benefits that members of eight of these accession countries (A8 nationals) can claim, which are covered by the Worker Registration Scheme.[34] Many other European Union member states exercised their right to temporary immigration control (which ended in 2011)[35] over entrants from these accession states,[36] but some subsequently removed these restrictions ahead of the 2011 deadline.[37]

Research conducted by the Migration Policy Institute for the Equality and Human Rights Commission suggests that, between May 2004 and September 2009, 1.5 million workers migrated from the new EU member states to the UK, but that many have returned home, with the result that the number of nationals of the new member states in the UK increased by some 700,000 over the same period.[38][39] Migration from Poland in particular has become temporary and circular in nature.[40] In 2009, for the first time since the enlargement, more nationals of the eight Central and Eastern European states that joined the EU in 2004 left the UK than arrived.[41] Research commissioned by the Regeneration and Economic Development Analysis Expert Panel suggested migrant workers leaving the UK due to the recession are likely to return in the future and cited evidence of "strong links between initial temporary migration and intended permanent migration".[42]

The Government announced that the same rules would not apply to nationals of Romania and Bulgaria (A2 nationals) when those countries acceded to the EU in 2007. Instead, restrictions were put in place to limit migration to students, the self-employed, highly skilled migrants and food and agricultural workers.[43]

In February 2011, the Leader of the Labour Party, Ed Miliband, stated that he thought that the Labour government's decision to permit the unlimited immigration of eastern European migrants had been a mistake, arguing that they had underestimated the potential number of migrants and that the scale of migration had had a negative impact on wages.[44][45]

A report by the Department for Communities and Local Government (DCLG) entitled International Migration and Rural Economies, suggests that intra-EU migration since enlargement has resulted in migrants settling in rural locations without a prior history of immigration.[46]

Research published by University College London in July 2009 found that, on average, A8 migrants were younger and better educated than the native population, and that if they had the same demographic characteristics as natives, they would be 13 per cent less likely to claim benefits and 28 per cent less likely to live in social housing.[47][48]

2. Racism

The United Kingdom, like most countries,[1] has racism among its citizens. Tensions between non-white and white Britons have resulted in race riots and racist murders perpetrated by extremists of all races.

Overview

Since World War I, public expressions of racism have primarily been limited to far-right political parties, such as the British National Front in the 1970s, while most mainstream politicians have publicly condemned all forms of racism. However, some argue that racism remains common, and some politicians and public figures have been accused of promoting racist attitudes in the media, particularly with regard to immigration.[2] There have been growing concerns in recent years about institutional racism in public and private bodies, and the tacit support this gives to crimes resulting from racism.

The Race Relations Act 1965 outlawed public discrimination and established the Race Relations Board. Further Acts in 1968 and 1976 outlawed discrimination in employment, housing and social services, and replaced the Race Relations Board with the Commission for Racial Equality.[3] The Human Rights Act 1998 made organisations in Britain, including public authorities, subject to the European Convention on Human Rights.[4] The Race Relations (Amendment) Act 2000 extended existing public-sector legislation to the police force and required public authorities to promote equality.

Although various pieces of anti-discrimination legislation exist, according to some sources most employers in the UK remain institutionally racist, including public bodies such as the police[5] and particularly the legal professions.[6][7] The situation with the implementation of human rights law is similar. The Terrorism Acts, which came into law in 2000 and 2006, have caused a marked increase in racial profiling and have also been used to justify existing trends in discrimination against persons of Muslim origin (or those resembling them) by the British police.

There have been tensions over immigration since the early 1900s, especially regarding arrivals from Russia, Poland, and other parts of Eastern Europe. Britain first began restricting immigration in 1905 under the Aliens Act, which was mainly aimed at Jews fleeing persecution in Russia. Before the Act, Britain had a liberal immigration policy, most notably throughout the Victorian period. Although the Act was extreme, Britain maintained an asylum policy for those fleeing religious or political persecution. However, asylum was curtailed in the 1930s to limit entry by refugees from Nazi policies. Despite restrictions, Britain was among the nations which accepted many immigrants before and after WWII.

Britain again restricted immigration in the early 1960s. Legislation targeted migration from the Commonwealth of Nations, whose citizens had previously been able to migrate to the UK under the British Nationality Act 1948. Conservative MP Enoch Powell made the controversial 1968 Rivers of Blood speech in opposition to Commonwealth immigration to Britain; this resulted in him being swiftly removed from the Shadow Cabinet.

Virtually all legal immigration, except for that of people claiming refugee status, ended with the Immigration Act 1971; however, free movement for citizens of the European Union was later established by the Immigration Act 1988. Legislation in 1993, 1996 and 1999 gradually decreased the rights and benefits given to those claiming refugee status ("asylum seekers"). According to the Office for National Statistics, 582,000 people came to live in the UK from elsewhere in the world in 2004.

Some commentators believe that a degree of racism, from within all communities, has gone undocumented in the UK, pointing to the many British cities whose populations have a clear racial divide. While these commentators believe that race relations have improved immensely over the last thirty years, they still regard racial segregation as an important but largely unaddressed problem, although research[8] has shown that ethnic segregation reduced within England and Wales between the 1991 Census and the 2001 Census.

The United Kingdom has been accused of "sleepwalking toward apartheid" by Trevor Phillips, chair of the country's Commission for Racial Equality. Phillips has said that Britain is fragmenting into isolated racial communities: "literal black holes into which no one goes without fear and trepidation and nobody escapes undamaged". Phillips believes that racial segregation in Britain is approaching that of the United States: "You can get to the point as they have in the U.S. where things are so divided that there is no turning back."[9]

The BBC has reported that crime statistics appear to support Phillips' concerns. They show that race-hate crimes increased by almost 600 per cent in London in the month after the 7 July bomb attacks, with 269 more offences allegedly "motivated by religious hatred" reported to the Metropolitan Police compared to the same period the previous year.[9]

Public sector employers in the UK are somewhat less likely to discriminate on grounds of race, as they are required by law to promote equality and make efforts to reduce racial and other discrimination. The private sector, however, is subject to little or no functional anti-discrimination regulation and, short of self-funded litigation, no remedies are available to members of ethnic minorities.[7] UK employers can also effectively absolve themselves of any legal duty not to discriminate on the basis of race by 'outsourcing' recruitment – and thus any liability for racial screening and discriminatory policies – to third-party recruitment companies.[10][11]

Race riots

There were fierce race riots targeting ethnic minority populations across the United Kingdom in 1919: South Shields,[12] Glasgow, London's East End, Liverpool, Cardiff, Barry, and Newport. There were further riots targeting immigrant and minority populations in East London and Notting Hill in the 1950s.

In the early 1980s, societal racism, discrimination and poverty – alongside further perceptions of powerlessness and oppressive policing – sparked a series of riots in areas with substantial African-Caribbean populations.[13] These riots took place in St Pauls in 1980; Brixton, Toxteth and Moss Side in 1981; St Pauls again in 1982; Notting Hill Gate in 1982; Toxteth in 1982; and Handsworth, Brixton and Tottenham in 1985.[14]

A 2004 report identified both "racial discrimination" and an "extreme racial disadvantage" in Britain, concluding that urgent action was needed to prevent these issues becoming an "endemic, ineradicable disease threatening the very survival of our society".[13] The era saw an increase in attacks on Black people by White people. The Joint Campaign Against Racism committee reported that there had been more than 20,000 attacks on non-indigenous Britons including Britons of Asian origin during 1985.[15]

Both the Bradford riots and the Oldham riots occurred in 2001, following cases of racism – either public displays of racist sentiment or, as in the Brixton riots, racial profiling and alleged harassment by police forces. In 2005, there were the Birmingham riots, arising from ethnic tensions between the British African-Caribbean and British Asian communities, with the spark for the riot being an alleged gang rape of a teenage black girl by a group of South Asian men.

Racism by country

England

Cornwall

Human rights activist Peter Tatchell campaigned on the issue of the Constitutional status of Cornwall. In November 2008, The Guardian carried an article by him entitled Self-rule for Cornwall,[16] in which he said:

Like Wales and Scotland, Cornwall considers itself a separate Celtic nation – so why shouldn't it have independence? [...]

[Cornish] Nationalists argue that Cornwall is a subjugated nation, in much the same way that Scotland and Wales once were. Not only is the historic Cornish flag – a white cross on a black background – excluded from the Union Jack; until not so long ago Cornish people needed planning permission to fly it. Comparisons with Scotland and Wales are valid. After all, Cornwall has all the basic cultural attributes of a nation: its own distinct Celtic language, history, festivals, cuisine, music, dance and sports. Many Cornish people perceive themselves to be other than English. Despite the government's resistance, under [the] Commission for Racial Equality and Council of Europe guidelines, they qualify for recognition as a national minority. [...]

Cornwall was once separate and self-governing. If the Cornish people want autonomy and it would improve their lives, why shouldn't they have self-rule once again? Malta, with only 400,000 people, is an independent state within the EU. Why not Cornwall?[16]

This article received the largest number of comments of any Guardian article, according to This is Cornwall.[17] Over 1,500 comments were made, and while some were supportive, Tatchell found himself "shocked and disgusted" by the anti-Cornish sentiment shown by many commenters.[17]

Northern Ireland

Since the end of the Troubles, Northern Ireland's immigrant community has tripled[18] and there has been a sharp rise in racist incidents. It has the highest number of racist incidents per person in the UK,[19][20][21] and has been branded the "race-hate capital of Europe".[22] Foreigners are three times more likely to suffer a racist incident in Northern Ireland than elsewhere in the UK.[23]

Most racist incidents happen in loyalist Protestant areas. Police say members of loyalist paramilitary groups have orchestrated a series of racist attacks aimed at "ethnically cleansing" these areas.[24] There have been pipe bomb, petrol bomb and gun attacks on the homes of immigrants.[25][26][27][28][29] Masked gangs have also ransacked immigrants' homes and assaulted the residents.[20] In 2009, more than 100 Roma were forced to flee their homes in Belfast following sustained attacks by a racist mob, who allegedly threatened to kill them.[30][31][32] That year, a Polish immigrant was beaten to death in a racist attack in Newry.[33] Police recorded more than 1,100 racist incidents in 2013/14, but they believe most incidents are not reported to them.[24]

Scotland

In 2005 and 2006, 1,543 victims of racist crime in Scotland were of Pakistani origin, while more than 1,000 victims were classed as "white British",[34] although the Scottish Parliament still has no official policy on "white on white" racism in Scotland.

Kriss Donald was a Scottish fifteen-year-old who was kidnapped and murdered in Glasgow in 2004. Five British Pakistani men were later found guilty of racially motivated violence; those convicted of murder were all sentenced to life imprisonment.[35]

However, there are indications that the Scottish authorities and people are well aware of the problem and are trying to tackle it. Among Scots under 15 years old, there are signs that "younger white pupils rarely drew on racist discourses".[36]

In 2009 the murder of an Indian sailor named Kunal Mohanty by a lone Scotsman named Christopher Miller resulted in Miller's conviction for a crime motivated by racial hatred. Miller's brother gave evidence during the trial and said Miller had told him he had "done a Paki".[37]

As of 11 February 2011, attacks on Muslims in Scotland had contributed to a 20% increase in racist incidents over the previous 12 months. Reports say that every day in Scotland 17 people are abused, threatened or violently attacked because of the colour of their skin, ethnicity or nationality. Statistics showed that just under 5,000 incidents of racism were recorded in 2009/10, a slight decrease from the racist incidents recorded in 2008/9.[38]

From 2004 to 2012 the rate of racist incidents was around 5,000 per year.[38] In 2011-12, there were 5,389 racist incidents recorded by the police, which is a 10% increase on the 4,911 racist incidents recorded in 2010-11.[38]
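For reference, the quoted 10% rise can be checked against the two police totals above (a minimal arithmetic sketch, rounded to the nearest per cent):

\[
\frac{5{,}389 - 4{,}911}{4{,}911} = \frac{478}{4{,}911} \approx 0.097 \approx 10\%
\]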

Racism in the police

Police force

Various police departments in the United Kingdom (such as the Greater Manchester Police, the Metropolitan Police Service, the Sussex Police and the West Yorkshire Police services)[39] have been accused of institutionalised racism throughout the late 20th and 21st centuries, by people such as the Chief Constable of the GMP in 1998 (David Wilmot); the BBC's Secret Policemen documentary 5 years later (which led to the resignation of 6 officers);[40] Metropolitan Police Commissioner Bernard Hogan-Howe;[41] and the Metropolitan Black Police Association.[42]

The National Black Police Association, which allows only African, African-Caribbean and Asian officers as full members, has been criticised by some as a racist organization because of its selective membership criteria based on ethnic origin.[43][44][45]

Michael Wilkes of the British Chinese Project said that racism against Chinese people isn't taken as seriously as racism against African-Caribbean or Asian people, and that many racist attacks on the Chinese community go unreported, primarily because of widespread mistrust of the police.[46]

Prison staff

Prison guards in the UK are almost twice as likely to be reported for racism as inmates, with racist incidents among prison guards themselves nearly as frequent as those between guards and prisoners. The environment has been described as a dangerous breeding ground for racist extremism.[47]

2. Racism

  1. After 2001 – instability and ‘the end of multiculturalism?’

  1. Bradford Riots – a short but intense period of rioting which began on 7 July 2001 in Bradford, West Yorkshire, England. It occurred as a result of heightened tension between the city's large and growing British Asian communities and its white majority, escalated by confrontation between the Anti-Nazi League and far-right groups such as the British National Party and the National Front.

Riots

The riot was estimated to have involved 1,000 youths.[10] On the nights of 8 and 9 July, groups of between thirty and a hundred white youths attacked police and Asian-owned businesses in the Ravenscliffe and Holmewood areas.[3] Initially 500 police officers were involved, but reinforcements later increased this to almost 1,000.[12] What began as a riot turned into an ethnically related disturbance, with targeting of businesses and cars, along with numerous attacks on shops and property. A notable point of the rioting was the firebombing of Manningham Labour Club, at the time a recreational centre. A 48-year-old Asian businessman was jailed for twelve years for the arson attack.

Aftermath

More than 300 police officers were hurt during the riot, and there were 297 arrests in total.

Oldham Riots (2001) – a short but intense period of violent rioting which occurred in Oldham in May 2001.

They were the worst ethnically motivated riots in the United Kingdom since 1985.

They were particularly intense in Glodwick, an area to the south-east of Oldham town centre.

They were highly violent and led to the use of petrol bombs, bricks, bottles and other projectiles by up to five hundred Asian youths as they battled against lines of riot police.[3] At least 20 people were injured in the riots, including fifteen officers, and 37 people were arrested.[4] Other parts of Oldham, such as Coppice and Westwood, were also involved.

Ritchie Report

The Ritchie Report was a major review both of the Oldham Riots and of the inter-ethnic problems that had long existed in the town. It was commissioned by the government, the Metropolitan Borough of Oldham and the local police authority. It was named after David Ritchie, Chairman of the Oldham Independent Review.

The report, published on 11 December 2001, was a 102-page document, addressed to the people of Oldham and was the sum total of much evidence gathering, including the interviewing of some 915 people and over 200 group meetings with local residents and governmental bodies.[1]

The Ritchie Report largely blamed deep-rooted segregation, which the authorities had failed to address for generations, for the Oldham Riots and for the town's prior and subsequent inter-ethnic problems.

It warned: "Segregation, albeit self-segregation, is an unacceptable basis for a harmonious community and it will lead to more serious problems if it is not tackled".[19]

Sentencing

On 12 June 2003, 10 people were jailed for nine months each after being convicted for their part in the rioting.

They were: Darren Hoy (aged 27 and from the Fitton Hill district of the town), his sister Sharon Hoy (aged 38 and from the Raper Street neighbourhood), their cousin Matthew Berry (aged 25 and from the Limedale district of the town), James Clift (aged 24 and from Chadderton), Mark Priest (aged 32 and from Glossop in Derbyshire), Alan Daley (aged 38 and from Failsworth), David Bourne (aged 35 and from Limeside), Steven Rhodes (aged 30 and from the Medway Road neighbourhood), Paul Brockway (aged 39 and from Blackley) and 22-year-old Failsworth man Stephen Walsh. A 16-year-old boy and a 17-year-old girl were also convicted of involvement in the riot but avoided prison sentences, instead receiving a supervision order and a conditional discharge respectively.[20]

Judge Jonathan Geake noted that none of the defendants were responsible for the rioting, and had directed the jury to clear the defendants of the charge of riot, before all 12 pleaded guilty to either affray or common assault.[20]

Cantle Report

Published on 25 May 2006, on the eve of the fifth anniversary of the Oldham riots, the Cantle Report 2006 was a 64-page document put together by the senior government advisor Professor Ted Cantle of the Institute of Community Cohesion.

It was commissioned by Oldham Metropolitan Borough Council to independently review the town's progress in its efforts to achieve racial harmony and community cohesion.

The report praised the council and town for their considerable progress and efforts, but said much more needed to be achieved given Oldham's projected increase in ethnic diversity in the coming decades. According to the report, the review teams were "struck by the extent to which divisions within and polarisation between Oldham's many communities continue to be a feature of social relations and the seeming reluctance of many sections of the community to embrace positive change".[21]

The report broadly had three messages:

In interviews with both the Oldham Evening Chronicle and BBC Radio, Cantle accused some community leaders of hindering progress because they were worried about losing their political influence. "We did find that a number of the communities, and particularly the community leaders were unwilling to get out of their comfort zones and that's a really big issue now".[22]

Legacy and impact

The legacy of the riots is broad and still unfolding, but has included improved ethnic relations and some community-amenity improvements in the town, including the creation of a new Oldham Cultural Quarter (which includes the state-of-the-art Gallery Oldham and Oldham Library), and a number of proposed improvements and investments in the town's community facilities.

The community facilities currently available in Oldham have nevertheless been heavily criticised, with not only the town but the entire Metropolitan Borough of Oldham said to be the largest such area without a major commercial cinema complex.

Some of the bodies and reports which proposed new community and amenity improvements included Oldham Beyond (April 2004), Forward Together (October 2004), and The Heart of Oldham (May 2004).

Several men, mainly of Bangladeshi heritage, were ultimately arrested and charged in connection with the riots.

Immediately after the Oldham Riots, the British National Party received an increase in the share of votes in both local and general elections; however, they have not won a seat to represent any part of the Metropolitan Borough of Oldham in the House of Commons or the Oldham Metropolitan Borough Council.

In the 2006 local elections, the BNP's share of the vote decreased markedly, which was highlighted in the Cantle Report the same year.

  1. 7/7 London bombings

The 7 July 2005 London bombings (often referred to as 7/7) were a series of coordinated suicide bomb attacks in central London, which targeted civilians using the public transport system during the morning rush hour.

On the morning of Thursday, 7 July 2005, four Islamist men detonated four bombs – three in quick succession aboard London Underground trains across the city and, later, a fourth on a double-decker bus in Tavistock Square. As well as the four bombers, 52 civilians were killed and over 700 more were injured in the attacks, the United Kingdom's worst terrorist incident since the 1988 Lockerbie bombing as well as the country's first ever suicide attack.

The explosions were caused by homemade organic peroxide-based devices packed into backpacks. The bombings were followed two weeks later by a series of attempted attacks that failed to cause injury or damage.

3. Why does Britain produce so many subcultures?

History of Youth Subcultures

Prior to World War II, young people in Western culture had little freedom or influence. The concept of the teenager emerged in post-war Britain and has its origins in America. Some reasons for the emergence of the teenager are:

• The post-war baby boom – after the war, soldiers returned home and started families

• Affluence and women in work – general standards of living were rising, including pay. More women also began to work, giving many families a dual income. Consequently, young people were not expected to give all of their wages to their parents and had disposable income for the first time. This meant they could spend money on having fun and being young before they had to take on greater responsibilities. However, Abel-Smith and Townsend in ‘The Poor and the Poorest’ (1965) suggest that the idea of a general affluence among all sections of society in Britain was largely a myth.

• Rise of consumer culture – throughout the 1950s, the growing numbers of young people began to influence music, television and cinema, spurring the explosion of rock and roll in the late 1950s and a full-blown youth culture in the mid 1960s, partly in the form of subcultures such as mods, rockers and hippies. As teenagers created their own identity and their disposable income increased, marketing companies focused their efforts on them. The tastes of young people began to drive fashion, music, films and literature. Companies adapted to this by devising marketing strategies, creating magazines such as NME and eventually their own TV channel, MTV. Soon a mass of fashion stores, coffee houses, discos, music and other commodities arose, all targeting the affluent teenager. Through advertising, they promised a new, exciting world for young people that could be experienced through the consumption of their products and services. The growth of capitalist culture and leisure industries has meant that all young people have access to the cultural resources they need to engage in ‘symbolic creativity’ in their leisure time. Therefore, the media and consumer industries played a large part in creating an identity for teenagers.

• Independence – young people also started to get married later, to move out of the family home before marriage and, thanks to the introduction of contraception, to have pre-marital sex.

• Range of styles available – Willis argues that the age of spectacular subcultures is gone for good. This is because there are so many style and taste cultures which offer young people different ways of expressing their identity. He claims that there is too much diversity for any single youth subculture to dominate society.

• Extension of education – the creation of youth cultures was accelerated by the introduction of public money for schools. In 1875, the Supreme Court decided that public money could be used to fund school education. This meant that adolescents and children were gathering together daily, creating their own identities and culture.

The extension of education to ages 14/16 led to young people seeing themselves as ‘different’, i.e. going through a ‘special phase’ in their development. This led to the development of specific types of youth culture that reflected the ‘special importance’ that society gives to this period of life. However, this fails to explain the behaviour of all teenagers. Why, for example, do some conform whilst others rebel? As a result of these changes, many different youth subcultures have developed.

Social constructionism

Social constructionism or the social construction of reality (also social concept) is a theory of knowledge in sociology and communication theory that examines the development of jointly constructed understandings of the world. It assumes that understanding, significance, and meaning are developed not separately within the individual, but in coordination with other human beings. The elements most important to the theory are (1) the assumption that human beings rationalize their experience by creating a model of the social world and how it functions and (2) that language is the most essential system through which humans construct reality.[1]

Definition

Social constructs are the by-products of countless human choices, rather than laws related to human judgment. Social constructionism is not related to anti-determinism, though. Social constructionism is typically positioned in opposition to essentialism, which sees phenomena in terms of inherent, transhistorical essences independent of human judgment.[2]

A major focus of social constructionism is to uncover the ways in which individuals and groups participate in the construction of their perceived social reality. It involves looking at the ways social phenomena are created, institutionalized, known, and made into tradition by humans. The social construction of reality is an ongoing, dynamic process that is (and must be) reproduced by people acting on their interpretations and their knowledge of it. Because social constructs as facets of reality and objects of knowledge are not "given" by nature, they must be constantly maintained and re-affirmed in order to persist. This process also introduces the possibility of change: i.e. what "justice" is and what it means shifts from one generation to the next.

Ian Hacking noted in The Social Construction of What? that social construction talk is often in reference not only to worldly items, like things and facts – but also to beliefs about them.[3]

Origins

Although the precise origin of social constructionism is debatable, it is generally considered that, within the context of social theory, social constructionism emerged during the 1980s and developed further during the 1990s. This is evident from the list of academic works with the words "Social Construction of" in their title which Ian Hacking lists on the first page of his book The Social Construction of What?. Hacking lists two titles from the 1970s, eight from the 1980s, and twenty-one from the 1990s.[4] This chronology is corroborated by Dave Elder-Vass in his book The Reality of Social Construction.[5]

Dave Elder-Vass cites the Berger and Luckmann book The Social Construction of Reality, originally published in 1966, as the work "which introduced the term social construction to sociologists and began the trajectory ... [of the development of social constructionism]".[6]

Andy Lock and Tom Strong trace some of the fundamental tenets of social constructionism back to the work of the 18th-century Italian political philosopher, rhetorician, historian, and jurist Giambattista Vico.[7]

According to Lock and Strong, other influential thinkers whose work has had an impact on the development of social constructionism are: Edmund Husserl, Alfred Schutz, Maurice Merleau-Ponty, Martin Heidegger, Hans-Georg Gadamer, Paul Ricoeur, Jürgen Habermas, Emmanuel Levinas, Mikhail Bakhtin, Valentin Volosinov, Lev Vygotsky, George Herbert Mead, Ludwig Wittgenstein, Gregory Bateson, Harold Garfinkel, Erving Goffman, Anthony Giddens, Michel Foucault, Ken Gergen, Mary Gergen, Rom Harre, and John Shotter.[8]

Based on the above, it could be surmised that the intellectual foundations of social constructionism span phenomenology, hermeneutics, poststructuralism and symbolic interactionism,[9] as well as some strands of literary criticism and social psychology.

Social constructionist analysis

"Social construction" may mean many things to many people. Ian Hacking, having examined a wide range of books and articles with titles of the form "The social construction of X" or "Constructing X", argues that when something is said to be "socially constructed", this is shorthand for at least the following two claims:

(0) In the present state of affairs, X is taken for granted; X appears to be inevitable.[10]:12

(1) X need not have existed, or need not be at all as it is. X, or X as it is at present, is not determined by the nature of things; it is not inevitable.[10]:6

Hacking adds that the following claims are also often, though not always, implied by the use of the phrase "social construction":

(2) X is quite bad as it is.

(3) We would be much better off if X were done away with, or at least radically transformed.[10]:6

Thus a claim that gender is socially constructed probably means that gender, as currently understood, is not an inevitable result of biology, but highly contingent on social and historical processes. In addition, depending on who is making the claim, it may mean that our current understanding of gender is harmful, and should be modified or eliminated, to the extent possible.

According to Hacking, "social construction" claims are not always clear about exactly what isn't "inevitable", or exactly what "should be done away with." Consider a hypothetical claim that quarks are "socially constructed". On one reading, this means that quarks themselves are not "inevitable" or "determined by the nature of things." On another reading, this means that our idea (or conceptualization, or understanding) of quarks is not "inevitable" or "determined by the nature of things". The distinction between "quarks themselves" and "our idea (or conceptualization, or understanding) of quarks" will undoubtedly trouble some with a philosophical bent. Hacking's distinction is based on intuitive metaphysics, with a split between things out in the world, on one hand, and ideas thereof in our minds, on the other. Hacking is less advocating a serious, particular metaphysics than he is suggesting a useful way to analyze claims about "social construction".[10]:21–24

Hacking is much more sympathetic to the second reading than the first.[10]:68–70 Furthermore, he argues that, if the second reading is taken, there need not always be a conflict between saying that quarks are "socially constructed" and saying that they are "real".[10]:29–30 In our gender example, this means that while a legitimate biological basis for gender may exist, some of society's perceptions of gender may be socially constructed.

The stronger first position, however, is more-or-less an inevitable corollary of Willard Van Orman Quine's concept of ontological relativity, and particularly of the Duhem-Quine thesis. That is, according to Quine and like-minded thinkers (who are not usually characterized as social constructionists) there is no single privileged explanatory framework that is closest to "the things themselves"—every theory has merit only in proportion to its explanatory power.[11]

As we step from the phrase to the world of human beings, "social construction" analyses can become more complex. Hacking briefly examines Helène Moussa’s analysis of the social construction of "women refugees".[10]:9–10 According to him, Moussa's argument has several pieces, some of which may be implicit:

  1. Canadian citizens' idea of "the woman refugee" is not inevitable, but historically contingent. (Thus the idea or category "the woman refugee" can be said to be "socially constructed".)

  2. Women coming to Canada to seek asylum are profoundly affected by the category of "the woman refugee". Among other things, if a woman does not "count" as a "woman refugee" according to the law, she may be deported, and forced to return to very difficult conditions in her homeland.

  3. Such women may modify their behavior, and perhaps even their attitudes towards themselves, in order to gain the benefits of being classified as a "woman refugee".

  4. If such a woman does not modify her behavior, she should be considered un-Canadian and as such should not be admitted to citizenship.

Hacking suggests that this third part of the analysis, the "interaction" between a socially constructed category and the individuals that are actually or potentially included in that category, is present in many "social construction" analyses involving types of human beings.

"Social constructionism accepts that there is an objective reality. It is concerned with how knowledge is constructed and understood. It has therefore an epistemological, not an ontological, perspective. Criticisms and misunderstanding arise when this central fact is misinterpreted. This is most evident in debates and criticisms surrounding realism and relativism. The words of Kirk and Miller are relevant when they suggest that the search for a final, absolute truth be left to philosophers and theologians. Social constructionism places great emphasis on everyday interactions between people and how they use language to construct their reality. It regards the social practices people engage in as the focus of enquiry".[12] According to the same author, Kirk and Miller investigated the Sam Sheppard case, which led to Sheppard's acquittal.

Applications

Personal construct psychology

Since its appearance in the 1950s, personal construct psychology (PCP) has mainly developed as a constructivist theory of personality and a system of transforming individual meaning-making processes, largely in therapeutic contexts.[13][14][15][16][17][18] It was based around the notion of persons as scientists who form and test theories about their worlds. Therefore, it represented one of the first attempts to appreciate the constructive nature of experience and the meaning persons give to their experience.[19] Social constructionism (SC), on the other hand, mainly developed as a form of a critique,[20] aimed to transform the oppressing effects of the social meaning-making processes. Over the years, it has grown into a cluster of different approaches,[21] with no single SC position.[22] However, different approaches under the generic term of SC are loosely linked by some shared assumptions about language, knowledge, and reality.[23]

A usual way of thinking about the relationship between PCP and SC is treating them as two separate entities that are similar in some aspects, but also very different in others. This way of conceptualizing this relationship is a logical result of the circumstantial differences of their emergence. In subsequent analyses these differences between PCP and SC were framed around several points of tension, formulated as binary oppositions: personal/social; individualist/relational; agency/structure; constructivist/constructionist.[24][25][26][27][28][29] Although some of the most important issues in contemporary psychology are elaborated in these contributions, the polarized positioning also sustained the idea of a separation between PCP and SC, paving the way for only limited opportunities for dialogue between them.[30][31]

Reframing the relationship between PCP and SC may be of use in both the PCP and the SC communities. On one hand, it extends and enriches SC theory and points to benefits of applying the PCP “toolkit” in constructionist therapy and research. On the other hand, the reframing contributes to PCP theory and points to new ways of addressing social construction in therapeutic conversations.[32]

Educational psychology[edit]

Social constructivism has been studied by many educational psychologists, who are concerned with its implications for teaching and learning. For more on the psychological dimensions of social constructivism, see the work of Ernst von Glasersfeld and A. Sullivan Palincsar.[33]

Systemic therapy[edit]

Systemic therapy is a form of psychotherapy which seeks to address people as people in relationship, dealing with the interactions of groups and their interactional patterns and dynamics.

Teleology of social construction[edit]

The concepts of weak and strong as applied to opposing philosophical positions, "isms", inform a teleology – the goal-oriented, meaningful or "final end" of an interpretation of reality. "Isms" are not personal opinions, but the extreme, modal, formulations that actual persons, individuals, can then consider, and take a position between. There are opposing philosophical positions concerning the feasibility of co-creating a common, shared, social reality, called weak and strong.

John R. Searle does not elucidate the terms strong and weak in his book The Construction of Social Reality,[34] but he clearly uses them in his Chinese room argument, where he debates the feasibility of creating a computing machine with a sharable understanding of reality, and he adds "We are precisely such machines." Strong artificial intelligence (Strong AI) is the bet that computer programmers will somehow eventually achieve a computing machine with a mind of its own, and that it will eventually be more powerful than a human mind. Weak AI bets they won't.

David Deutsch in his book The Fabric of Reality uses a form of strong Turing principle to share Frank Tipler's view of the final state of the universe as an omnipotent (but not omniscient), Omega point, computer. But this computer is a society of creative thinkers, or people (albeit posthuman, transhuman persons), having debates in order to generate information, in the never-ending attempt to attain omniscience of this physics—its evolutionary forms, its computational abilities, and the methods of its epistemology—having an eternity to do so. (p. 356)

Because both the Chinese room argument and the construction of social reality deal with Searle and his debates, and because they both use weak and strong to denote a philosophical position, and because both debate the programmability of "the other", it is worth noting the correspondence that "strong AI" is strong social constructionism, and "weak AI" is weak social constructivism.

Strong social constructivism says "none are able to communicate either a full reality or an accurate ontology, therefore my position must impose, by a sort of divine right, my observer-relative epistemology", whereas weak social constructivism says "none are able to know a full reality, therefore we must cooperate, informing and conveying an objective ontology as best we can."[35]

Weak teleology [edit]

Weak social constructionism sees the underlying, objective, "brute fact" elements of the class of languages and functional assignments of human, metaphysical, reality. Brute facts are all facts that are not institutional (metaphysical, social agreement) facts. The skeptic portrays the weak aspect of social constructivism, and wants to spend effort debating the institutional realities.

Harvard psychologist Steven Pinker[36] writes that "some categories really are social constructions: they exist only because people tacitly agree to act as if they exist. Examples include money, tenure, citizenship, decorations for bravery, and the presidency of the United States."

In a similar vein, Stanley Fish[37] has suggested that baseball's "balls and strikes" are social constructions.[38]:29–31

Both Fish and Pinker agree that the sorts of objects indicated here can be described as part of what John Searle calls "social reality."[39]:22 In particular, they are, in Searle's terms, ontologically subjective but epistemologically objective.[34]:63 "Social facts" are temporally, ontologically, and logically dependent on "brute facts." For example, "money" in the form of its raw materials (rag, pulp, ink) as constituted socially for barter (for example by a banking system) is a social fact of "money" by virtue of (i) collectively willing and intending (ii) to impose some particular function (purpose for which), (iii) by constitutive rules atop the "brute facts." "Social facts have the remarkable feature of having no analogue among physical brute facts" (34). The existence of language is itself constitutive of the social fact (37), which natural or brute facts do not require. Natural or "brute" facts exist independently of language; thus a "mountain" is a mountain in every language and in no language; it simply is what it is.[34]:29, et seq

Searle illustrates the evolution of social facts from brute facts by the constitutive rule: X counts as Y in C. "The Y term has to assign a new status that the object does not already have just in virtue of satisfying the Y term; and there has to be collective agreement, or at least acceptance, both in the imposition of that status on the stuff referred to by the X term and about the function that goes with that status. Furthermore, because the physical features (brute facts) specified by the X term are insufficient by themselves to guarantee the fulfillment of the assigned function specified by the Y term, the new status and its attendant functions have to be the sort of things that can be constituted by collective agreement or acceptance."[34]:44
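
Searle's schema "X counts as Y in context C" has a simple formal shape. The following minimal sketch (illustrative only; the class name, field names and the money example are my own choices for demonstration, not Searle's) models the rule as a small data structure in which a status function is imposed on a brute object only where collective acceptance holds:

    from dataclasses import dataclass
    from typing import Optional

    # Illustrative sketch of Searle's constitutive rule "X counts as Y in C".
    # Names and example values are assumptions chosen for demonstration.
    @dataclass
    class ConstitutiveRule:
        x_term: str                   # brute object or lower-level fact
        y_status: str                 # imposed status function
        context: str                  # the context C in which the rule holds
        collectively_accepted: bool   # collective agreement or acceptance

        def counts_as(self) -> Optional[str]:
            # X counts as Y in C only given collective acceptance of the status.
            if self.collectively_accepted:
                return f"{self.x_term} counts as {self.y_status} in {self.context}"
            return None  # without acceptance, only the brute fact remains

    rule = ConstitutiveRule("this printed piece of paper", "money", "the banking system", True)
    print(rule.counts_as())

The point of the sketch is only that the Y status is not derivable from the physical properties of the X term; it holds solely in virtue of the acceptance condition, mirroring Searle's claim that institutional facts are logically dependent on, but not reducible to, brute facts.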

It is true that language is not a "brute fact," that it is an institutional fact, a human convention, a metaphysical reality (that happens to be physically uttered), but Searle points out that there are language-independent thoughts "noninstitutional, primitive, biological inclinations and cognitions not requiring any linguistic devices," and that there are many "brute facts" amongst both humans and animals that are truths that should not be altered in the social constructs because language does not truly constitute them, despite the attempt to institute them for any group's gain: money and property are language dependent, but desires (thirst, hunger) and emotions (fear, rage) are not.[34]:62 (Descartes describes the difference between imagination as a sort of vision, or image, and intellect as conceptualizing things by symbolic manipulation.) Therefore, there is doubt that society or a computer can be completely programmed by language and images, (because there is a programmable, emotive effect of images that derives from the language of judgment towards images).

Finally, against the strong theory and for the weak theory, Searle insists, "it could not be the case, as some have maintained, that all facts are institutional [i.e., social] facts, that there are no brute facts, because the structure of institutional facts reveals that they are logically dependent on brute facts. To suppose that all facts are institutional [i.e., social] would produce an infinite regress or circularity in the account of institutional facts. In order that some facts are institutional, there must be other facts that are brute [i.e., physical, biological, natural]. This is the consequence of the logical structure of institutional facts.".[34]:56

Ian Hacking, Canadian philosopher of science, insists, "the notion that everything is socially constructed has been going the rounds. John Searle [1995] argues vehemently (and in my opinion cogently) against universal constructionism."[10]:24 "Universal social constructionism is descended from the doctrine that I once named linguistic idealism and attributed, only half in jest, to Richard Nixon [Hacking, 1975, p. 182]. Linguistic idealism is the doctrine that only what is talked about exists, nothing has reality until it is spoken of, or written about. This extravagant notion is descended from Berkeley's idea-ism, which we call idealism: the doctrine that all that exists is mental."[10]:24 "They are a part of what John Searle [1995] calls social reality. His book is titled The Construction of Social Reality, and as I explained elsewhere [Hacking, 1996], that is not a social construction book at all."[10]:12

Hacking observes that "the label 'social constructionism' is more code than description"[10]:15, a code used by Leftist, Marxist, Freudian, and Feminist postmodernists to call into question every moral, sex, gender, power, and deviant claim as just another essentialist claim, including the claim that members of the male and female sex are inherently different rather than historically and socially constructed. Hacking observes that his 1995 simplistic dismissal of the concept actually revealed to many readers the outrageous implications of the theorists: is child abuse a real evil, or a social construct, Hacking asked. His dismissive attitude "gave some readers a way to see that there need be no clash between construction and reality,"[10]:29 inasmuch as "the metaphor of social construction once had excellent shock value, but now it has become tired."[10]:35

Informally, such socially constructed categories require human practices to sustain their existence, but they have an effect that is (basically) universally agreed upon. The disagreement lies in whether this category should be called "socially constructed." Ian Hacking[40] argues that it should not. Furthermore, it is not clear that authors who write "social construction" analyses ever mean "social construction" in Pinker's sense.[41] If they never do, then Pinker (probably among others) has misunderstood the point of a social constructionist argument.

To understand how weak social constructionism can conclude that metaphysics (a human affair) is not the entire "reality," see the arguments against the study of metaphysics. This inability to accurately share the full reality, even given time for a rational conversation, is similarly proclaimed by weak artificial intelligence.

History and development[edit]

Berger and Luckmann[edit]

Constructionism became prominent in the U.S. with Peter L. Berger and Thomas Luckmann's 1966 book, The Social Construction of Reality. Berger and Luckmann argue that all knowledge, including the most basic, taken-for-granted common sense knowledge of everyday reality, is derived from and maintained by social interactions. When people interact, they do so with the understanding that their respective perceptions of reality are related, and as they act upon this understanding their common knowledge of reality becomes reinforced. Since this common sense knowledge is negotiated by people, human typifications, significations and institutions come to be presented as part of an objective reality, particularly for future generations who were not involved in the original process of negotiation. For example, as parents negotiate rules for their children to follow, those rules confront the children as externally produced "givens" that they cannot change. Berger and Luckmann's social constructionism has its roots in phenomenology. It links to Heidegger and Edmund Husserl through the teaching of Alfred Schutz, who was also Berger's PhD adviser.

Narrative turn[edit]

During the 1970s and 1980s, social constructionist theory underwent a transformation as constructionist sociologists engaged with the work of Michel Foucault and others as a narrative turn in the social sciences was worked out in practice. This had a particular impact on the emergent sociology of science and the growing field of science and technology studies. In particular, Karin Knorr-Cetina, Bruno Latour, Barry Barnes, Steve Woolgar, and others used social constructionism to relate what science has typically characterized as objective facts to the processes of social construction, with the goal of showing that human subjectivity imposes itself on those facts we take to be objective, not solely the other way around. A particularly provocative title in this line of thought is Andrew Pickering's Constructing Quarks: A Sociological History of Particle Physics. At the same time, social constructionism shaped studies of technology, especially work on the social construction of technology, or SCOT, by authors such as Wiebe Bijker, Trevor Pinch, Maarten van Wesel, etc.[42][43] Despite its common perception as objective, mathematics is not immune to social constructionist accounts. Sociologists such as Sal Restivo and Randall Collins, mathematicians including Reuben Hersh and Philip J. Davis, and philosophers including Paul Ernest have published social constructionist treatments of mathematics.

Postmodernism[edit]

Social constructionism can be seen as a source of the postmodern movement, and has been influential in the field of cultural studies. Some have gone so far as to attribute the rise of cultural studies (the cultural turn) to social constructionism. Within the social constructionist strand of postmodernism, the concept of socially constructed reality stresses the ongoing mass-building of worldviews by individuals in dialectical interaction with society at a time. The numerous realities so formed comprise, according to this view, the imagined worlds of human social existence and activity, gradually crystallised by habit into institutions propped up by language conventions, given ongoing legitimacy by mythology, religion and philosophy, maintained by therapies and socialization, and subjectively internalised by upbringing and education to become part of the identity of social citizens.

In the book The Reality of Social Construction, the British sociologist Dave Elder-Vass places the development of social constructionism as one outcome of the legacy of postmodernism. He writes "Perhaps the most widespread and influential product of this process [coming to terms with the legacy of postmodernism] is social constructionism, which has been booming [within the domain of social theory] since the 1980s."[5]

Criticisms[edit]

Social constructionism falls toward the nurture end of the spectrum of the larger nature and nurture debate. Consequently, critics have argued that it generally ignores biological influences on behavior or culture, or suggests that they are unimportant to achieve an understanding of human behavior.[44] The view of most psychologists and social scientists is that behavior is a complex outcome of both biological and cultural influences.[45][46] Other disciplines, such as evolutionary psychology, behavior genetics, behavioral neuroscience, epigenetics, etc., take a nature-nurture interactionism approach to understand behavior or cultural phenomena.

In 1996, to illustrate what he believed to be the intellectual weaknesses of social constructionism and postmodernism, physics professor Alan Sokal submitted an article to the academic journal Social Text deliberately written to be incomprehensible but including phrases and jargon typical of the articles published by the journal. The submission, which was published, was an experiment to see if the journal would "publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions."[47] The Postmodernism Generator is a computer program that is designed to produce similarly incomprehensible text.[48] In 1999, Sokal, with coauthor Jean Bricmont, published the book Fashionable Nonsense, which criticized postmodernism and social constructionism.
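
As an illustration of the general technique behind such generators, the following is a minimal sketch of recursive template expansion; it is not the actual engine behind the Postmodernism Generator, and the grammar rules and vocabulary below are invented for demonstration only:

    import random

    # Minimal sketch: nonsense prose via recursive expansion of a tiny grammar.
    # All rules and vocabulary here are invented for demonstration only.
    GRAMMAR = {
        "SENTENCE": [
            "The {CONCEPT} of {CONCEPT} is itself a form of {CONCEPT}.",
            "{THINKER} suggests that {CONCEPT} merely reinscribes {CONCEPT}.",
        ],
        "CONCEPT": ["discourse", "textuality", "the gaze", "hyperreality"],
        "THINKER": ["Derrida", "Baudrillard", "Foucault"],
    }

    def expand(symbol):
        # Pick a random production for the symbol and expand any {PLACEHOLDERS}.
        template = random.choice(GRAMMAR[symbol])
        while "{" in template:
            start = template.index("{")
            end = template.index("}", start)
            template = template[:start] + expand(template[start + 1:end]) + template[end + 1:]
        return template

    print(" ".join(expand("SENTENCE") for _ in range(3)))

The output reads superficially like theory-laden prose while carrying no argument, which is the effect both the hoax article and the generator were meant to demonstrate.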

Philosopher Paul Boghossian has also written against social constructionism. He follows Ian Hacking's argument that many adopt social constructionism because of its potentially liberating stance: if things are the way that they are only because of our social conventions, as opposed to being so naturally, then it should be possible to change them into how we would rather have them be. He then states that social constructionists argue that we should refrain from making absolute judgements about what is true and instead state that something is true in the light of this or that theory. Countering this, he states:

"But it is hard to see how we might coherently follow this advice. Given that the propositions which make up epistemic systems are just very general propositions about what absolutely justifies what, it makes no sense to insist that we abandon making absolute particular judgements about what justifies what while allowing us to accept absolutegeneral judgements about what justifies what. But in effect this is what the epistemic relativist is recommending"[49]

Later in the same work, Boghossian severely constrains the requirements of relativism. He states that instead of believing that any world view is just as true as any other (cultural relativism), we should believe that:

"If we were to encounter an actual, coherent, fundamental, genuine alternative to our epistemic system, C2, whose track record was impressive enough to make us doubt the correctness of our own system, C1, we would not be able to justify C1 over C2 even by our own lights.

Woolgar and Pawluch[50] argue that constructionists tend to 'ontologically gerrymander' social conditions in and out of their analysis. Following this point, Thibodeaux[51] argued that constructionism can both separate and combine a subject and their effective environment. To resolve this he argued that objective conditions should be used when analyzing how perspectives are motivated.

Social constructionism has been criticized by evolutionary psychologists, including Steven Pinker in his book The Blank Slate.[citation needed] John Tooby and Leda Cosmides used the term "standard social science model" to refer to social-science philosophies that they argue fail to take into account the evolved properties of the brain.[52]

4. Critical theory

Critical theory is a school of thought that stresses the reflective assessment and critique of society and culture by applying knowledge from the social sciences and the humanities. As a term, critical theory has two meanings with different origins and histories: the first originated in sociology and the second originated in literary criticism, whereby it is used and applied as an umbrella term that can describe a theory founded upon critique; thus, the theorist Max Horkheimer described a theory as critical insofar as it seeks "to liberate human beings from the circumstances that enslave them."[1]

In sociology and political philosophy, the term critical theory describes the neo-Marxist philosophy of the Frankfurt School, which was developed in Germany in the 1930s. Frankfurt theorists drew on the critical methods of Karl Marx and Sigmund Freud. Critical theory maintains that ideology is the principal obstacle to human liberation.[2] Critical theory was established as a school of thought primarily by five Frankfurt School theoreticians: Herbert Marcuse, Theodor Adorno, Max Horkheimer, Walter Benjamin, and Erich Fromm. Modern critical theory has additionally been influenced by György Lukács and Antonio Gramsci, as well as the second generation Frankfurt School scholars, notably Jürgen Habermas. In Habermas's work, critical theory transcended its theoretical roots in German idealism, and progressed closer to American pragmatism. Concern for social "base and superstructure" is one of the remaining Marxist philosophical concepts in much of the contemporary critical theory.[3]

While critical theorists have been frequently defined as Marxist intellectuals,[4] their tendency to denounce some Marxist concepts and to combine Marxian analysis with other sociological and philosophical traditions has resulted in accusations of revisionism by Classical, Orthodox, and Analytical Marxists, and by Marxist-Leninist philosophers. Martin Jay has stated that the first generation of critical theory is best understood as not promoting a specific philosophical agenda or a specific ideology, but as "a gadfly of other systems".[5]

Definitions[edit]

The two meanings of critical theory—from different intellectual traditions associated with the meaning of criticism and critique—derive ultimately from the Greek word κριτικός, kritikos, meaning judgment or discernment, and in their present forms go back to the 18th century. While they can be considered completely independent intellectual pursuits, increasingly scholars[who?] are interested in the areas of critique where the two overlap.[citation needed]

To use an epistemology distinction introduced by Jürgen Habermas in Erkenntnis und Interesse [1968] (Knowledge and Human Interests), critical theory in literary studies is ultimately a form of hermeneutics; i.e., knowledge via interpretation to understand the meaning of human texts and symbolic expressions—including the interpretation of texts which themselves interpret other texts. Critical social theory is, in contrast, a form of self-reflective knowledge involving both understanding and theoretical explanation which aims to reduce entrapment in systems of domination or dependence.

From this perspective, much literary critical theory, since it is focused on interpretation and explanation rather than on social transformation, would be regarded as positivistic or traditional rather than critical theory in the Kantian or Marxian sense. Critical theory in literature and the humanities in general does not necessarily involve a normative dimension, whereas critical social theory does, either through criticizing society from some general theory of values, norms, or "oughts," or through criticizing it in terms of its own espoused values.[citation needed]

In social theory[edit]

Critical theory was first defined by Max Horkheimer of the Frankfurt School of sociology in his 1937 essay Traditional and Critical Theory: Critical theory is a social theory oriented toward critiquing and changing society as a whole, in contrast to traditional theory oriented only to understanding or explaining it. Horkheimer wanted to distinguish critical theory as a radical, emancipatory form of Marxian theory, critiquing both the model of science put forward by logical positivism and what he and his colleagues saw as the covert positivism and authoritarianism of orthodox Marxism and Communism.

Core concepts are: (1) That critical social theory should be directed at the totality of society in its historical specificity (i.e. how it came to be configured at a specific point in time), and (2) That critical theory should improve understanding of society by integrating all the major social sciences, including geography, economics, sociology, history, political science, anthropology, and psychology.

This version of "critical" theory derives from Kant's (18th-century) and Marx's (19th-century) use of the term "critique", as in Kant's Critique of Pure Reason and Marx's concept that his work Das Kapital (Capital) forms a "critique of political economy." For Kant's transcendental idealism, "critique" means examining and establishing the limits of the validity of a faculty, type, or body of knowledge, especially through accounting for the limitations imposed by the fundamental, irreducible concepts in use in that knowledge system.

Kant's notion of critique has been associated with the disestablishment of false, unprovable, or dogmatic philosophical, social, and political beliefs, because Kant's critique of reason involved the critique of dogmatic theological and metaphysical ideas and was intertwined with the enhancement of ethical autonomy and the Enlightenment critique of superstition and irrational authority. Ignored by many in "critical realist" circles, however, is that Kant's immediate impetus for writing his "Critique of Pure Reason" was to address problems raised by David Hume's skeptical empiricism which, in attacking metaphysics, employed reason and logic to argue against the knowability of the world and common notions of causation. Kant, by contrast, pushed the employment of a priori metaphysical claims as requisite, for if anything is to be said to be knowable, it would have to be established upon abstractions distinct from perceivable phenomena.

Marx explicitly developed the notion of critique into the critique of ideology and linked it with the practice of social revolution, as stated in the famous 11th of his Theses on Feuerbach, "The philosophers have only interpreted the world, in various ways; the point is to change it."[6]

One of the distinguishing characteristics of critical theory, as Adorno and Horkheimer elaborated in their Dialectic of Enlightenment (1947), is a certain ambivalence concerning the ultimate source or foundation of social domination, an ambivalence which gave rise to the “pessimism” of the new critical theory over the possibility of human emancipation and freedom.[7] This ambivalence was rooted, of course, in the historical circumstances in which the work was originally produced, in particular, the rise of National Socialism, state capitalism, and mass culture as entirely new forms of social domination that could not be adequately explained within the terms of traditional Marxist sociology.[8]

For Adorno and Horkheimer, state intervention in the economy had effectively abolished the tension between the "relations of production" and "material productive forces of society," a tension which, according to traditional critical theory, constituted the primary contradiction within capitalism. The market (as an "unconscious" mechanism for the distribution of goods) and private property had been replaced by centralized planning and socialized ownership of the means of production.[9]

Yet, contrary to Marx’s famous prediction in the Preface to a Contribution to the Critique of Political Economy, this shift did not lead to "an era of social revolution," but rather to fascism and totalitarianism. As such, critical theory was left, in Jürgen Habermas’ words, without "anything in reserve to which it might appeal; and when the forces of production enter into a baneful symbiosis with the relations of production that they were supposed to blow wide open, there is no longer any dynamism upon which critique could base its hope."[10] For Adorno and Horkheimer, this posed the problem of how to account for the apparent persistence of domination in the absence of the very contradiction that, according to traditional critical theory, was the source of domination itself.

In the 1960s, Jürgen Habermas raised the epistemological discussion to a new level in his Knowledge and Human Interests, by identifying critical knowledge as based on principles that differentiated it from either the natural sciences or the humanities, through its orientation to self-reflection and emancipation. Though unsatisfied with Adorno and Horkheimer's thought presented in Dialectic of Enlightenment, Habermas shares the view that, in the form of instrumental rationality, the era of modernity marks a move away from the liberation of enlightenment and toward a new form of enslavement.[11]

His ideas regarding the relationship between modernity and rationalization are in this sense strongly influenced by Max Weber. Habermas dissolved further the elements of critical theory derived from Hegelian German Idealism, though his thought remains broadly Marxist in its epistemological approach. Perhaps his two most influential ideas are the concepts of the public sphere and communicative action; the latter arriving partly as a reaction to new post-structural or so-called "post-modern" challenges to the discourse of modernity. Habermas engaged in regular correspondence with Richard Rorty and a strong sense of philosophical pragmatism may be felt in his theory; thought which frequently traverses the boundaries between sociology and philosophy.

Postmodern critical theory[edit]

While modernist critical theory (as described above) concerns itself with “forms of authority and injustice that accompanied the evolution of industrial and corporate capitalism as a political-economic system,” postmodern critical theory politicizes social problems “by situating them in historical and cultural contexts, to implicate themselves in the process of collecting and analyzing data, and to relativize their findings”.[12] Meaning itself is seen as unstable due to the rapid transformation in social structures. As a result, the focus of research is centered on local manifestations, rather than broad generalizations.

Postmodern critical research is also characterized by the crisis of representation, which rejects the idea that a researcher’s work is an “objective depiction of a stable other.” Instead, many postmodern scholars have adopted “alternatives that encourage reflection about the ‘politics and poetics’ of their work. In these accounts, the embodied, collaborative, dialogic, and improvisational aspects of qualitative research are clarified”.[13]

The term "critical theory" is often appropriated when an author (perhaps most notably Michel Foucault) works within sociological terms, yet attacks the social or human sciences (thus attempting to remain "outside" those frames of inquiry).

Jean Baudrillard has also been described as a critical theorist to the extent that he was an unconventional and critical sociologist; this appropriation is similarly casual, holding little or no relation to the Frankfurt School.

Language and construction[edit]

The two points at which there is the greatest overlap or mutual impingement of the two versions of critical theory are in their interrelated foci on language, symbolism, and communication and in their focus on social construction.

Language and communication[edit]

From the 1960s and 1970s onward, language, symbolism, text, and meaning came to be seen as the theoretical foundation for the humanities, through the influence of Ludwig Wittgenstein, Ferdinand de Saussure, George Herbert Mead, Noam Chomsky, Hans-Georg Gadamer, Roland Barthes, Jacques Derrida and other thinkers in linguistic and analytic philosophy, structural linguistics, symbolic interactionism, hermeneutics, semiology, linguistically oriented psychoanalysis (Jacques Lacan, Alfred Lorenzer), and deconstruction.

When, in the 1970s and 1980s, Jürgen Habermas redefined critical social theory as a theory of communication, i.e. communicative competence and communicative rationality on the one hand, distorted communication on the other, the two versions of critical theory began to overlap to a much greater degree than before.

Construction[edit]

Both versions of critical theory have focused on the processes by which human communication, culture, and political consciousness are created.

There is a common interest in the processes (often of a linguistic or symbolic kind) that give rise to observable phenomena and here there is some mutual influence among the different versions of critical theory. Ultimately, this emphasis on production and construction goes back to the revolution in philosophy wrought by Kant, namely his focus in the Critique of Pure Reason on synthesis according to rules as the fundamental activity of the mind that creates the order of our experience.

21st Century[edit]

Since 2010 the Birkbeck Institute for the Humanities has organized annually the London Critical Theory Summer School; announced participants for the 2015 event include Etienne Balibar, Wendy Brown, David Harvey, Jacqueline Rose and Slavoj Žižek.[14]

Critical theory is a type of social theory oriented toward critiquing and changing society as a whole, in contrast to traditional theory oriented only to understanding or explaining it. Critical theories aim to dig beneath the surface of social life and uncover the assumptions that keep us from a full and true understanding of how the world works. It was developed by a group of sociologists at the University of Frankfurt in Germany who referred to themselves as The Frankfurt School, including Jürgen Habermas, Herbert Marcuse, Walter Benjamin, Max Horkheimer, and Theodor Adorno.

Two core concepts of critical theory are that it should be directed at the totality of society in its historical specificity (how it came to be at a specific point in time) and that it should improve the understanding of society by integrating all the major social sciences, including geography, economics, sociology, history, political science, anthropology, and psychology.

According to Max Horkheimer, Director of the Frankfurt School's Institute for Social Research, a critical theory is adequate only if it meets three criteria: it must be explanatory, practical, and normative, all at the same time. That is, it must explain what is wrong with current social reality, identify the actors to change it, and provide both clear norms for criticism and achievable practical goals for social transformation.

5. Body culture studies describe and compare bodily practice in the larger context of culture and society, i.e. in the tradition of anthropology, history and sociology. As body culture studies analyse culture and society in terms of human bodily practices, they are sometimes viewed as a form of materialist phenomenology. The significance of the body and of body culture (in German Körperkultur, in Danish kropskultur) has been recognized since the early twentieth century by several historians and sociologists. During the 1980s, a particular school of Body Culture Studies spread, in connection with – and critically related to – sports studies. Body Culture Studies were especially established at Danish universities and academies and cooperated with Nordic, European and East Asian research networks.

Body culture studies include studies of dance, play and game, outdoor activities, festivities and other forms of movement culture. The field of body culture studies is expanding towards studies of medical cultures, of working habits, of gender and sexual cultures, of fashion and body decoration, of popular festivity and more generally towards popular culture studies.

Body Culture Studies have proven useful by bringing the study of sport into broader historical and sociological discussion – from the level of subjectivity to civil society, state and market.

Earlier studies in body and culture[edit]

Since the early 20th century, sociologists and philosophers have drawn attention to the significance of the body, especially Norbert Elias, the Frankfurt School, and some phenomenologists. Later, Michel Foucault, Pierre Bourdieu and the Stuttgart Historical Behaviour Studies delivered important inspirations for the new body culture studies.

The sociologist Norbert Elias (1939) wrote the first sociology that placed the body and bodily practice at its centre, describing the change of table manners, shame and violence from the Middle Ages to Early Modern court society as a process of civilisation. Later, Elias (1989) studied the culture of the duel in Wilhelminian Prussia, throwing light on particular traits of the German Sonderweg. Elias’ figurational sociology of the body became productive especially in the field of sport studies (Elias/ Dunning 1986; Eric Dunning et al. 2004). His concept of the "process of civilisation" also received criticism, however, from the comparative anthropology of bodily practices (Duerr 1988/2005).

The Frankfurt School of Critical Theory turned towards the body with Marxist and Freudian perspectives. Max Horkheimer and Theodor W. Adorno (1947) described the Western “dialectics of enlightenment” as including an underground history of the body. Body history led from the living body to the dead body becoming a commodity under capitalism. A younger generation of the Frankfurt School launched the Neo-Marxist sports critique (Rigauer 1969) and developed alternative approaches to movement studies and movement culture (Lippe 1974; Moegling 1988). Historical studies about the body in industrial work (Rabinbach 1992), in transportation (Schivelbusch 1977), and in Fascist aesthetics (Theweleit 1977) as well as in the philosophy of space (Peter Sloterdijk 1998/ 2004) had their roots in this critical approach.

Philosophical phenomenology (→Phenomenology (philosophy)) paid attention to the body, too. Helmuth Plessner (1941) studied laughter and weeping as fundamental human expressions. Maurice Merleau-Ponty (1945) placed the body in the centre of human existence, as a way of experiencing the world, challenging the traditional body-mind dualism of René Descartes. Gaston Bachelard (1938) approached bodily existence via a phenomenology of the elements and of space, starting with a “psychoanalysis of fire”.

Based on phenomenological traditions, Michel Foucault (1975) studied the configurations of knowledge in the post-1800 society, launching the concept of modern panoptical control (→Panopticon). The body appeared as an object of military discipline and of the panopticon as a mechanism of “the biopolitics of power”. Foucault’s approach became especially influential for studies in sport, space, and architecture (Vertinsky/ Bale 2004) as well as for studies in the discipline of gymnastics and sport (Vigarello 1978; Barreau/ Morne 1984; Vertinsky/ McKay 2004).

While Foucault’s studies focused on top-down strategies of power, Pierre Bourdieu directed his attention more towards bottom-up processes of social-bodily practice. For analysing the class aspect of the body, Bourdieu (1966/67) developed the influential concept of habitus as an incorporated pattern becoming social practice by diverse forms of taste, distinction and display of the body. Some of Bourdieu’s disciples applied these concepts to the study of sports and gymnastics (Defrance 1987).

In Germany, influences of phenomenology induced body culture studies in the historical field. The Stuttgart school of Historical Behaviour Studies focused from 1971 on gestures and laughter, martial arts, sport and dance to analyze changes of society and differences between European and non-European cultures (Nitschke 1975, 1981, 1987, 1989, 2009; Henning Eichberg 1978).

These approaches met with tendencies of the late 1970s and 1980s, when humanities and sociology developed a new and broader interest in the body. Sociologists, historians, philosophers and anthropologists, scholars from sport studies and from medical studies met in talking about “the return of the body” or its “reappearance” (Kamper/ Wulf 1982). The new interest towards the body was soon followed up by the term “body culture” itself.

The word and concept of “body culture” – alternative practice[edit]

The word “body culture” appeared for the first time around 1900, at that time signifying a certain form of physical practice. The so-called “life reform” (German Lebensreform) aimed at the reform of clothing and nutrition and favoured new bodily activities, which constituted a new sector side by side with established gymnastics and sport. The main fields of this third sector of movement culture were nudism, rhythmic-expressive gymnastics, yoga and body building (Wedemeyer-Kolwe 2004) as well as a new type of youth wandering. Though highly diverse, they found a comprehensive term in the German word Körperkultur, in English physical culture (→physical education), in French culture physique, and in Danish kropskultur. Inspirations from the movement of body culture gave birth to early studies in the history of bodily positions and movements (Gaulhofer 1930; Marcel Mauss 1934).

In German Socialist workers’ sport, the concept of Körperkultur had a prominent place. The concept also entered into Russian Socialism where fiskultura became an alternative to bourgeois sport, uniting the revolutionary fractions of more aesthetical Proletkult and more health-oriented "hygienism" (Riordan 1977). Later, Stalinism forced the contradictory terms under the formula “sport and body culture”. This continued in the Soviet bloc after 1945. When the 1968 student movement revived Marxism, the concept of body culture – Körperkultur in West Germany, “somatic culture” in America – re-entered into the sports-critical discourse, but received new analytical dimensions. Quel corps? (Which body?) was the title of a critical review of sports, edited by the French Marxist educationalist Jean-Marie Brohm in 1975-1997. In Germany, a series of books under the title Sport: Kultur, Veränderung (Sport: culture, change) marked the body cultural turn from 1981, with works of Rigauer, Elias, Eichberg and others.

Body culture studies – a new critical school[edit]

In Denmark, a particular school of Body Culture Studies – kropskultur – developed since around 1980 in connection with the critique of sport (Korsgaard 1982; Eichberg 1998; Vestergård 2003; Nielsen 1993 and 2005). It had its background in Danish popular gymnastics and in alternative movement practices – outdoor activities, play and game, dance, meditation. In Finland, the concept ruumiinkulttuuri found a similar attention (Sironen 1995; Sparkes/ Silvennoinen 1999).

In international cooperation, “body anthropology” became the keyword for French, Danish and German philosophers, sociologists and educationalists who founded the Institut International d’Anthropologie Corporelle (IIAC) in 1987. They undertook case studies in traditional games as well as in “scenes” of new urban body cultures (Barreau/ Morne 1984; Barreau/ Jaouen 1998; Dietrich 2001 and 2002).

Body culture studies found a particular interest in East Asian countries. In Japan, the sociologist Satoshi Shimizu from the University of Tsukuba established in 2002 a Centre for the Study of Body Culture, publishing the review Gendai Sports Hyôron (Contemporary Sport Critique, in Japanese, since 1999). In Taiwan, Hsu I-hsiung from the National Taiwan Normal University founded in 2003 the Taiwan Body Culture Society (Taiwan shenti wenhua xiehui), publishing the reviews Sport Studies (in Chinese, since 2007) and Body Culture Journal (in Chinese, since 2005). And in Korea, Jong Young Lee from the University of Suwon published since 2004 the International Journal of Eastern Sport & Physical Education, focusing on body culture and traditional games.

These initiatives were connected with each other both by contents and by personal networks. In the English and American world, Allen Guttmann (1978, 1996, 2004), John Hoberman (1984), John Bale (1996, 2002, 2004), Susan Brownell (1995, 2008) and Patricia Vertinsky (2004) contributed by opening the history, sociology and geography of sports towards body culture studies.

While the concept of body culture had earlier denoted an alternative practice and was used in the singular, it now became an analytical category describing body cultures in the plural. The terms of physical culture (or physical education) and body culture separated – the first describing a practice, the second a subject of theoretical analysis.

Questioning the “individual” body[edit]

Studies in body culture have shown that bodily existence is more than just “the body” as being an individual skin bag under control of an individual mind. Bodily practice happens between the different bodies. This questions current types of thinking “the individual”: the epistemological individualism and the thesis of ‘late-modern individualization’.

The methodological habit of counter-posing “the individual” and “the society” is largely disseminated in sociology. It was fundamentally criticized by Norbert Elias who underlined that there was no meaning in the separation between the individual as a sort of core of human existence and the society as a secondary environment around this core. Society was inside the human body. In contrast, the epistemological solipsism treated human existence as if the human being was alone in the world – and was only in a secondary process “socialized” (Peter Sloterdijk 1998 vol. 1).

Another current assumption is the historical-sociological individualism. Sociologists such as Ulrich Beck and Anthony Giddens have postulated that individualization during “high” or “late modernity” had replaced all earlier traditions – religion, nation, class – and left “the individual” alone with its body. The body, thus, got a central position as the only fixed point of “self-identity” left after the dissolution of the traditional norms. The individual chooses and makes its own body as a sort of “Gesamtkunstwerk Ego”.

Body-cultural studies have challenged this assumption (Henning Eichberg 2010: 58-79). They throw light on inter-bodily relations, within which the human individuality has a much more complex position.

Social time[edit]

An important aspect of body culture is temporal. Modern society is characterized by the significance of speed and acceleration. Sport, giving priority to competitive running and racing, is central among the phenomena illustrating the specifically modern velocity (Eichberg 1978, Bale 2004). The historical change from the circulating stroll in aristocratic and early bourgeois culture to modern jogging as well as the changes from coach traffic via the railway (Schivelbusch 1977) to the sport race of automobiles (→auto racing) (Sachs 1984) produced new body-cultural configurations of social time.

On the basis of transportation and urbanism, blitzkrieg and sports, the French architect and cultural theorist Paul Virilio (1977) launched the terms of “dromology” (i.e. science of racing) and “dromocracy” (power or dominance of velocity) to describe the knowledge and the politics of modern social acceleration. But the concept of social time embraces many more differentiations, which can be explored by comparing time-dynamic movements of different ethnic cultures (Hall 1984).

Social space[edit]

Another important aspect of body culture is spatial. Bodily display and movement always create space – physical space as socio-psychical space and vice versa. Bodily activities have during history changed between indoor or outdoor milieus, between non-specialized environment, specialized facilities (→sports facilities) and bodily opposition against existing standardized facilities or what was called "sportscape". In movement, straight lines and the culture of the streamline were confronted by mazes and labyrinthine structures, by patterns of fractal geometry. All these patterns are not just spatial-practical arrangements, but they play together with societal orientations. Under this aspect, one has described the history of panoptical control (Foucault 1975; Vertinsky/ Bale 2004), the parcellation of the sportive space, and the hygienic purification of spaces (Augestad 2003). Proxemics (Hall 1966), the study of distance and space, has become a special field of body culture studies.

Body culture studies have also influenced the understanding of “nature”. In the period around 1800, the “nature” of body culture – of outdoor life, naturism and green movements (→green politics) – became a world of liberation and opposition: “Back to nature!” In the course of modern industrial culture, this “other” nature became subjected to colonization and simulation, forming a “second nature”. It even became a virtual world, which is simulating people’s senses as a “third nature”. The study of body culture contributed to a history of cultural ecology (Eichberg 1988).

Body cultural studies also contributed to a differentiation between what in everyday language often is confused as ‘space’ and ‘place’ whose dialectics were shown by the Chinese-American philosopher Yi-Fu Tuan (see Bale 2004). Space can be described by coordinates and by certain choreographies. Spatial structures can be standardized and transferred from place to place, which is the case with the standardized facilities of sports. Place, in contrast, is unique – it is only here or there. Locality is related to identity. People play in a certain place – and create the place by play and game. The place plays with the people, as a co-player.

Civilisation, discipline, modernity[edit]

Studies of body culture enriched the analysis of historical change by conflicting terms. Norbert Elias (1986) studied sport in order to throw light on the civilizing process (→The Civilizing Process). In sports, he saw a line going from original violence to civilized interlacement and pacification. Though there were undertones of hope, Elias tried to avoid evolutionism, which since the nineteenth century postulated a ‘progressive’ way from ‘primitive’ to ‘civilized’ patterns.

While the concept of civilization normally had hopeful undertones, discipline had more critical undertones. Cultures of bodily discipline became visible – following Foucault and the Frankfurt School – in Baroque dance (Lippe 1974), in aristocratic and bourgeois pedagogy of the spinal column during the eighteenth and nineteenth centuries (Vigarello 1978), and in hygienic strategies, school sanitation and school gymnastics during the twentieth century (Augestad 2003). Military exercise (→military drill) in Early Modern times was the classic field for body cultural discipline (Gaulhofer 1930; Kleinschmidt 1989).

In the field of sports, a central point of body-cultural dispute has been the question whether sport had its roots in Ancient Greek competitions of the Olympic type or whether it was fundamentally linked to modernity. While nineteenth century’s Neo-Humanism, Classicism and Olympism assumed the ancient roots of sport, body cultural studies showed that the patterns central to modern sports – quantification, rationalisation, principle of achievement – could not be dated before the industrial culture of the eighteenth and nineteenth centuries (Eichberg 1978; Guttmann 1978). What was practiced before, were popular games, noble exercises, festivities of different character, children’s games and competitions, but not sport in modern understanding. The emergence of modern sport was an eruptive innovation rather than a logical prolongation of earlier practices. As a revolution of body culture, this transformation contributed to a deeper understanding of the Industrial Revolution. The so-called Eichberg-Mandell-Guttmann theory about the uniqueness of modern sport became, however, a matter of controversies and was opposed by other historians (Carter/Krüger 1990).

What came out of the controversies between the concepts of modernity, evolution, civilization, discipline and revolution was that “modernization” can only be thought of as a non-linear change, full of nuances and contradictions. This is how the history of sport (Nielsen 1993 and 2005) and of gymnastics (Defrance 1987; Vestergård Madsen 2003) as well as the history of running (Bale 2004) have been described in body-cultural terms.

One of the visible and at the same time deeper changes in relation to the modern body concerns the dress reform and the appearance of the naked body, especially in the years between 1900 and the 1920s. The change from noble pale skin to suntanned skin as a ‘sportive’ distinction was not only linked to sport, but had a strong impact on society as a whole. The change of appreciated body colour reversed the social-bodily distinctions between people and classes fundamentally, and nudism became a radical expression of this body-cultural change.

Industrial body and production[edit]

Body culture studies have cast new light on the origins and conditions of the Industrial Revolution, which in the eighteenth and nineteenth centuries transformed people’s everyday life in a fundamental way. The traditional common-sense explanations of industrialization by technology and economy as ‘driving forces’ have proven insufficient. Economic interests and technological change had their basic conditions in human social-bodily practice. The history of sport and games in body cultural perspective showed that this practice was already changing one or two generations before the Industrial Revolution as a technological and economic transformation took place. What had been carnival-like festivities, tournaments and popular games before became modern sport by a new focus on results, measuring and quantifying records (Eichberg 1978; Guttmann 1978). Under the aspect of the principle of achievement, there was no sport in ancient Egypt, in ancient Greece, among the Aztecs or Vikings, and in the European Middle Ages, though there were games, competitions and festivities. Sport as a new type of body culture resulted from societal changes in the eighteenth-nineteenth centuries.

The genesis of sport in connection with industrial productivity called attention to the historical-cultural relativity of “production” (→Manufacturing) itself. Studies in the history of “the human motor” and the “mortal engines” of sport showed reification (→reification (Marxism)) and technology as lines of historical dynamics (Rigauer 1969; Vigarello 1988; Rabinbach 1992; Hoberman 1992). Production became apparent not as a universal concept, but as something historically specific – and sport was its body-cultural ritual.

Trialectics of body culture[edit]

Body culture as a field of contradictions demands a dialectical approach, but it is not dualistic in character. Body culture studies have revealed trialectical relations inside the world of sports (Eichberg 1998, 2010; Bale 1996, 2002 and 2004).

The hegemonic model of Western modern body culture is achievement sport, translating movement into records. Sportive competition follows the logic of productivity by bodily strain and forms a ranking pyramid with elite sports placed at the top and the losers at the bottom. Through sportive movement, people display a theatre of production.

A contrasting model within modern body culture is delivered by mass sports. In gymnastics and fitness training, the body is disciplined by subjecting it to certain rules of “scientific”, social geometrical or aesthetic order (Roubal 2007). By rhythmic repetition and formal homogenization, the individual bodies are integrated into a larger whole, which is recommended in terms of reproduction (→reproduction (economics)), as being healthy and educative. Through fitness sport, people perform a ritual of reproductive correctness and integration.

A third model is present in popular festivity, dance and play. In carnival and folk sport, people meet people by festive movement. This type of gathering may give life to the top-down arrangements of both productive achievement sport and reproductive fitness sport, too. But the body experience of popular festivity, dance, play and game is a-productive in itself – it celebrates relation in movement.

Practices of sport in their diversity and their historical change, thus, clarify inner contradictions inside social life more generally – among these the contradictions between state, market and civil society. The trialectics of body culture throw light on the complexity of societal relations.

Body cultures in plural[edit]

“Culture” in singular is an abstraction. The study of body culture is always a study of body cultures in plural. Body cultures show human life in variety and differences, assimilation and distinction, conflicts and contradictions. This demands a comparative approach to otherness, and this is the way several studies in body culture have gone.

Culture was studied as cultures already by the school of Cultural Relativism in American anthropology (American Anthropological Association) in the 1930s (Ruth Benedict). Postcolonial studies have taken this pluralistic perspective up again (Bale 1996 and 2004; Brownell 1995; Azoy 2003; Leseth 2004). The discourse in singular about “the body in our society” became problematic when confronted with body cultures in conflict and tension.

The plurality and diversity of body cultures is, however, not only a matter of outward relations. There are also body cultures in plural inside a given society. The study of different class habitus (→class culture), youth cultures, gender cultures (→gender identity) etc. opened up deeper insights into the differentiation of civil society.

Configurational analysis[edit]

Body culture studies try to understand bodily practice as patterns revealing the inner tensions and contradictions of a given society. In order to analyze these connections, the study of body culture has turned attention to the configurations of movement in time and space, the energy of movement, its interpersonal relations and objectification (→Configurational analysis (Konfigurationsanalyse)). Above this basis, people build a superstructure of institutions and ideas, organising and reflecting body culture in relation to collective actions and interests (Eichberg 1978; Dietrich 2001: 10-32; see keyword 2).

By elaborating the complex interplay between bodily practice and the superstructures of ideas and conscience, body culture studies challenge the established history and sociology.

6. What contribution can cultural theory make to our understanding of ‘myth’ in popular culture?

In modern society, myth is often regarded as historical or obsolete. Many scholars in the field of cultural studies are now beginning to research the idea that myth has worked itself into modern discourses. Modern formats of communication allow for widespread communication across the globe, thus enabling mythological discourse and exchange among greater audiences than ever before. Various elements of myth can now be found in television, cinema and video games. Although myth was traditionally transmitted through the oral tradition on a small scale, the technology of the film industry has enabled filmmakers to transmit myths to large audiences via film dissemination.

In the psychology of Carl Jung, myths are the expression of a culture or society’s goals, fears, ambitions and dreams. Film is ultimately an expression of the society in which it was produced, and reflects the norms and ideals of the time and location in which it is created. In this sense, film is simply the evolution of myth. The technological aspect of film changes the way the myth is distributed, but the core idea of the myth is the same. The basis of modern storytelling in both cinema and television lies deeply rooted in the mythological tradition. Many contemporary and technologically advanced movies often rely on ancient myths to construct narratives. The Disney Corporation is notorious among cultural studies scholars for “reinventing” traditional childhood myths.

With the invention of modern myths such as urban legends, the mythological tradition will carry over into the increasing variety of media available in the 21st century and beyond. The crucial idea is that myth is not simply a collection of stories permanently fixed to a particular time and place in history, but an ongoing social practice within every society.

The so-called culturalist reading of the term developed by both Thompson and Williams was subsequently challenged by other more obviously structuralist interpretations. These emphasized the external symbolic structures of culture, as embodied in cultural languages and codes, rather than its lived forms. In this formulation culture could be read as a signifying system through which the social world was mapped.

Robson:

MYTH

The presence of mythic thought in modernity (Eliade).

Definitions of myth:

Campbell: myth as

Mircea Eliade (1907-86). Continuations of mythical thinking into the 20th century. Christianity had a great impact on Western communities.

Cults – little groups who believed they were right.

There is a secret knowledge which will set us free. These ideas (Gnosticism) were never removed from Christianity.

MYTHS OF ORIGIN

The nation is one of the modern expressions of the myth of origin. What tribes are we derived from? The current order attains an aura of sacredness (night, costumes, torture, drama). Noble origins and 19th-century nationalism. Loss of purity, we were rejected from paradise.

In culture and media

Jung – concepts included:

1. The Archetype

2. The Collective Unconscious – core of deep, universal archetypes in minds, the DNA of human psyche.

Jung hardly appeared in cultural theory. Freud was popular in the 1950s and 1960s. Films – Freudian, the idea of the ego. Emphasis on Oedipal dynamics – melodrama, horror, thriller, film noir.

The “Jungian” noughties? – Harry Potter. Ch. Booker – 7 basic narratives (plots) after Jung: overcoming the monster, rags to riches, the quest, voyage and return, comedy, tragedy, rebirth.

“The Lord of the Rings” series covers all of these.

Universal monomyth (Campbell) – storytelling referring to everybody.

H. Jenkins, “Convergence Culture” – we’re already familiar with the structure of films, we’ve seen many of them. Everybody’s familiarity with formula – today films are not exciting anymore, they’re just fun to watch. Jenkins promotes a limited return to the concept of myth – parallels between canonical texts such as “The Odyssey” and contemporary “transmedia storytelling” in “The Matrix”.

Escape from time and place

7. Point out the differences between discourse analysis and critical discourse analysis as two approaches to language communication research.

Discourse analysis – a way of approaching a problem or a situation through ‘deconstructing’ the text. In this sense, discourse analysis is neither a qualitative nor a quantitative research method, but a manner of questioning the basic assumptions of quantitative and qualitative research methods. Discourse analysis enables us to reveal the hidden motivations behind a text or behind the choice of a particular method of research to interpret that text. It will, thus, not provide absolute answers to a specific problem, but enable us to understand the conditions behind a specific "problem" and make us realize that the essence of that "problem", and its resolution, lie in its assumptions – the very assumptions that enable the existence of that "problem".
Van Dijk perceives discourse analysis as ideology analysis. Discourse analysis is primarily text based – syntax, lexicon, local semantics, topics, schematic structures. It identifies the rules which make a text into, e.g., a fascist text; in the same way as grammar characterizes the structure of sentences, discourse rules characterize utterances/texts that are acceptable within a certain practice.

Discourse studies – the systematic and explicit analysis of the various structures & strategies of different levels of text and talk. Disciplines of discourse studies:

DA distinguishes between a more content-oriented step of (1) structure analysis and a more linguistically oriented step of (2) fine analysis. Within structure analysis, a characterization of the media and the general themes has to be made. Within the fine analysis, DA focuses upon context, text surface and rhetorical means. Exemplary linguistic instruments are figurativeness, vocabulary and argumentation types. DA takes into account both qualitative and quantitative aspects of these features. DA analyses:

• the kind and form of argumentation

• certain argumentation strategies

• the intrinsic logic and composition of texts

• somehow implicit implications and insinuations

• the collective symbolism or ‘figurativeness’, metaphorism and so on, both in language and in graphic contexts (statistics, photographs, pictures, caricatures, etc.)

• idioms, sayings, clichés, vocabulary and style

• actors (persons, pronominal structure)

• references, for example, to (the) science(s)

• the particulars of the sources of knowledge, etc.

Critical discourse analysis - it is a type of discourse analytical research that primarily studies the way social power abuse, dominance and inequality are enacted, reproduced and resisted by text and talk in the social and political context. It is not a research method but a discipline, an approach. It uses tools from different studies; it involves linguistics, poetics, semiotics, pragmatics, psychology, sociology, anthropology, history, and communication research.

The significant difference between discourse analysis and CDA lies in the constitutive problem-oriented, interdisciplinary approach of the latter, which endorses all of the above points but goes beyond them. CDA is therefore not interested in investigating a linguistic unit per se but in studying social phenomena, which are necessarily complex and thus require a multidisciplinary approach. The objects under investigation do not have to be related to negative or exceptionally ‘serious’ social or political experiences or events – this is a frequent misunderstanding of the aims and goals of CDA and of the term ‘critical’, which, of course, does not mean ‘negative’ as in common-sense usage. Any social phenomenon lends itself to critical investigation, to be challenged and not taken for granted.

It was increasingly emphasized in the 1980s and 1990s that discourse analysis should also have a critical dimension. That is, in the choice of its orientation, topics, problems, issues and methods, discourse analysis should actively participate, in its own academic way, in social debates, and do research that would serve those who need it most, rather than those who can pay most.

9. What are the basic tenets of critical discourse analysis as an approach to the study of language communication?

Fairclough & Wodak summarize the main tenets of CDA:

1. CDA addresses social problems and political issues
2. Power relations are discursive
3. Discourse constitutes society and culture
4. Discourse does ideological work
5. Discourse is historical
6. The link between text and society is mediated
7. Discourse analysis is interpretative and explanatory
8. Discourse is a form of social action

Critical research on discourse needs to satisfy a number of requirements in order to effectively realize its aims:

- As is often the case for more marginal research traditions, CDA research has to be 'better' than other research in order to be accepted.

- Rather than merely describing discourse structures, it tries to explain them in terms of properties of social interaction and especially social structure.

8. Using the framework of critical discourse analysis, explain the relationship of language to power and ideology.

Concerning the term ‘Critical Discourse Analysis’, a variety of approaches towards the social analysis of discourse might be distinguished, among which the most commonly known are those presented by Fairclough and Wodak. Nevertheless, there is an assumption shared by CDA practitioners which states that language and power are undoubtedly linked. Therefore, each of the approaches views language as a form of social practice and focuses on the ways social and political domination are reproduced in text and talk.

In addition to the linguistic theory of Critical Discourse Analysis, the approach draws from social theory in order to examine ideologies and power relations involved in discourse. It might be noted that language connects with social theory through being the primary domain of ideology, and through being a site of, and a stake in, struggles for power.

All in all, Critical Discourse Analysis illustrates the problems generated by the relationship between ideology and power. All of our words are used to convey a broad range of meanings, and the meaning we convey with those words is determined by our immediate social, political, and historical conditions. That is why our words are never neutral: they carry the power that reflects the interests of those who speak, whether opinion leaders, courts, governments or editors.

In each case, a number of rules and patterns, such as what is talked about and how, might be observed. The words of those in power are taken as self-evident truths and the words of those not in power are dismissed as irrelevant, inappropriate, or without substance. The critical approach to discourse analysis attempts to link the text (micro level) with the underlying power structures in society (macro level) through the discursive practices upon which the text is drawn. That is, a text – a description of something happening in a larger social context consisting of a complex set of power relations – is interpreted and acted upon by readers or listeners depending on their rules, norms, and mental models of socially acceptable behaviour and background knowledge.

Discussing this topic in detail, it might be observed that the concept of power, which is maintained through language, relates to control over how texts are produced, distributed and consumed in particular socio-cultural contexts. What is more, another term has to be mentioned here, i.e. ‘dominance’, which, according to Van Dijk, is the exercise of social power by elites, institutions or groups that results in social inequality, including political, cultural, class, ethnic, racial and gender inequality. That is why critical discourse analysts want to know what structures, strategies or other properties of text, talk, verbal interaction or communicative events play a role in these modes of reproduction.

Taking into account ideology as a system of ideas which constitutes and controls the large power blocks of our society, and language considered as a medium of ideological force and the one that legitimizes relations of organized power, it might be observed that language is a material form of ideology, and simultaneously language is invested by ideology. Nevertheless, it has to be noted that ideologies reside in texts but it is not possible to ‘read off’ ideologies from texts, because meanings are produced through interpretations of texts and texts are open to diverse interpretations. Therefore, a language ideology can be a ‘correct’ conceptualization of language or it can dissent from the facts, and be a fallacious interpretation of language. Ideologies form the basis of the belief systems or social representations of specific groups. Moreover, ideologies as special forms of social cognition shared by social groups form the basis of the social representations and practices of group members, including their discourse, which at the same time serves as the means of ideological production, reproduction and challenge.

What might be mentioned is also the fact that nowadays probably the most important social institution in carrying out the processes of transmitting ideologies, meanings and values is the mass media. Some discourse genres, such as newspapers and political propaganda, have the explicit aim of teaching ideologies to group members and newcomers. Thus ideologies are not innate, but learnt, and precisely the content and form of such discourse may be more or less likely to form intended mental models of social events, which finally may be generalized and abstracted to social representations and ideologies.

  1. Critical Discourse Analysis – a variety of approaches towards the social analysis of discourse

  2. CDA – not only linguistic theory:

  3. CDA – relationship between ideology and power:

  4. Power of language:

  5. Ideology & language:

* form the basis of the belief systems or social representations of specific groups

* as special forms of social cognition shared by social groups form the basis of the social representations and practices of group members, including their discourse, which at the same time serves as the means of ideological production, reproduction and challenge

9. What are the basic tenets of critical discourse analysis as an approach to the study of language communication?

Basic Tenets of Critical Discourse Analysis

Fairclough (one of the founders of critical discourse analysis (CDA) as applied to sociolinguistics) offers five theoretical propositions that frame his approach to CDA.

  1. Discourse (language use) shapes and is shaped by society:

This is viewed as a two-way, dialectical relationship: language changes according to the context, and situations are altered according to the language used – for example, advertising and news can affect attitudes, behaviour, etc.

  2. Discourse helps to constitute (and change) knowledge, social relations and social identity:

The way language is used affects the way the world is represented – nationalism, us and them. An appeal to ‘Back to Basics’ sounds like a good thing, but in many ways masks the implications of such a move and its underlying philosophy. Anti-abortionists terming themselves ‘pro-life’ implies that their opponents are ‘anti-life’.

  3. Discourse is shaped by relations of power and invested with ideologies:

An example of this is the way certain languages, accents or dialects are valued or devalued – the notion of the standard as ‘good’ is an interpretation that needs to be problematized. The way medical language (traditional, technologized medicine) is presented compared with alternative therapies holds ideological assumptions about what is best, what is common sense, etc. Even the term ‘alternative medicine’ is marginalising in that it implies that ‘non-alternative medicine’ is the norm, rather than one of two options.

  4. The shaping of discourse is a stake in power struggles:

If the previous tenet is correct, then language is a powerful mechanism for social control and, therefore, is contested and contestable.

  5. CDA aims to show how society and discourse shape each other:

Language use is not a neutral phenomenon. CDA is concerned with developing consciousness of this, a precondition for developing new practices and conventions, and thus with contributing to social emancipation and social justice.

Fairclough and Wodak offered eight foundational principles for CDA, namely the eight tenets listed earlier: CDA addresses social problems; power relations are discursive; discourse constitutes society and culture; discourse does ideological work; discourse is historical; the link between text and society is mediated; discourse analysis is interpretative and explanatory; discourse is a form of social action.

In summary, then, CDA can be seen as a highly context-sensitive, democratic approach which takes an ethical stance on social issues with the aim of transforming society - an approach or attitude rather than a step by step method.

CDA is founded on the idea that there is unequal access to linguistic and social resources, resources that are controlled institutionally. It is therefore primarily concerned with institutional discourses – media, policy, gender, labelling etc. A key concept is that of the ‘naturalization’ of particular representations as ‘common sense’ (Fairclough 1989). Something comes to be seen as ‘common sense’ when it, and its implicit assumptions, are no longer seen as questionable but as a simple matter of fact. When a discourse becomes so dominant that alternative interpretations are entirely suppressed or ignored, it ceases to be seen as arbitrary or as merely one position; it comes to be viewed as natural, and has legitimacy, simply because that is ‘the way things are’. Thus a naturalised discourse loses its ideological character and appears as neutral – it represents its ‘story’ as the ‘truth’ and implies that the learning of this discourse requires only the learning of a set of skills or techniques, as may be seen in the contemporary approach to teacher education (training?) in the current neo-liberal context. Learning to be a teacher becomes a matter of learning the techniques rather than engaging with questions around the purposes of education and critically considering the most appropriate ways to meet these purposes.

An example of this from secondary school education in the 1970s was the naturalisation of a discourse of ‘discipline’ in which corporal punishment was an unquestionably legitimate approach, an issue and associated action that now appears to be highly questionable and contestable. Such issues again have clear links with the notion of hegemony, or holding power through consent and acquiescence.

CDA views texts as artefacts that do not occur in isolation – socio-political and socio-historic contexts contribute to the production and interpretation of texts and are crucial aspects of the analysis. It operates on three levels of analysis – engaging with the text, the discursive practices (processes of production, reception, interpretation) and the wider socio-political and socio-historic context.

10. Wpływ języka angielskiego na współczesny język polski (leksyka, frazeologia, składnia, słowotwórstwo).

1. Wpływ języka angielskiego na język polski – zarys historyczny: Pierwsze zapożyczenia zaczęły pojawiać się w drugiej połowie XVII wieku. Język angielski zaczął stawać się modny za sprawą romantyzmu. Przejawiało się to we wpływie angielskich idei romantycznych na język polski (Mickiewicz odwoływał się do idei romantycznych charakteryzujących twórców angielskich: Scotta, Byrona), a także w słownictwie angielskiego pochodzenia: dżokej, lord, poncz, toast, wist, frak. Wpływ języka angielskiego na polszczyznę przybrał na sile w XX wieku.

2. Zapożyczenia leksykalne właściwe: to wyrazy zapożyczone z innego języka wraz ze znaczeniem. Mogą one zostać spolszczone pod względem brzmienia i pisowni, a także poddawane są polskim formom odmiany. Przykłady: mecz, laser, komputer

3. Kalki językowe (zapożyczenia strukturalne)- wyrazy zewnętrznie znajome, swojskie, powstałe jednak na wzór słów obcych. Wierne odwzorowania obcych konstrukcji językowych, tłumaczenia ich na język polski.

4. Zapożyczenia semantyczne – polegają na przejęciu dodatkowego znaczenia wyrazu obok znaczenia już znanego, np. korespondować, które ma znaczenie ‘prowadzić korespondencję’, zyskało nowe znaczenie ‘odpowiadać czemuś’ na podstawie angielskiego słowa correspond

5. Zapożyczenia sztuczne – słowa tworzone w danym języku na bazie części (morfemów) innych języków, np. telewizja (television).

Do zapożyczeń sztucznych należą hybrydy, w których jeden element pochodzi z języka obcego. Przykłady: minikosmetyczka, radiosłuchacz. Zapożyczenia sztuczne mają ogromny wpływ na słowotwórstwo w języku polskim. Przedrostki takie jak anty-, auto-, ekstra-, mikro-, mini-, super- tworzą nowe słowa w połączeniu z polskim tematem słowotwórczym, np. superzabawa, ekstraniespodzianka, etc. Kolejny wpływ zapożyczeń sztucznych na słowotwórstwo to na przykład przyrostek -er (szpaner, blokers) oraz morfem -gate, który używany jest w przypadku afer politycznych, np. Rywingate, Orlengate.

6. Fałszywi przyjaciele – para słów lub wyrażeń brzmiących w dwóch językach tak samo lub podobnie, ale mających inne znaczenia, np. actually (prawidłowe znaczenie: ‘właściwie’, znaczenie mylące: ‘aktualnie’).

7. Wpływ na gramatykę/składnię

- W języku angielskim konstrukcja, w której przymiotnik występuje po określanym rzeczowniku, dozwolona jest tylko w pewnych wyjątkach, natomiast w języku polskim taka kolejność jest częsta. Pomimo tego wyrażenia takie jak polityczny pluralizm (political pluralism) lub wirtualna rzeczywistość (virtual reality) stały się normą.

- określanie rzeczownika innym rzeczownikiem, na przykład zamiast myjnia samochodowa Polacy używają terminu auto-myjnia (car wash), lub zamiast wyrażenia plan biznesowy stosuje się biznesplan (business plan).

- Kalki przyimkowe: “logo zapisane dla WinWorda” (logo recorded for WinWord), w którym przyimek dla jest kalką angielskiego przyimka for. Wyrażenie to powinno być przetłumaczone jako “logo zapisane w WinWordzie”.

Zapożyczenie językowe to element języka wywodzący się z obcej mowy i posiadający cechy do niej przynależne, który stał się powszechnie używany w innym języku narodowym. Zapożyczenia świadczą o wymianie kulturowej pomiędzy różnymi narodami i postępującej wymianie informacji. Ich występowanie w języku polskim nasiliło się wyjątkowo silnie w ostatnich dwudziestu pięciu latach. Wynika to z rozwoju Internetu, otwarcia granic pomiędzy Polską a zachodnimi państwami oraz możliwości swobodnego podróżowania i komunikowania się z ludźmi z innych krajów. Niniejsza praca ma na celu omówić to zagadnienie, analizując zapożyczenia językowe we współczesnej polszczyźnie oraz dokonując oceny tego zjawiska.

Jako że zapożyczenia angielskie są najbardziej popularne, to właśnie im chciałbym poświęcić najwięcej czasu. Ich obecność szczególnie wyraźnie widoczna jest w nowomowie komputerowej, która apogeum swojego rozwoju przeżywa w czasach nam obecnych. W języku komputerów i Internetu najlepiej zauważalne są zwroty z języka angielskiego, takie jak bit, dekoder, interfejs, bot czy CD-ROM. Konieczność zastosowania anglicyzmów w tym przypadku jest spowodowana faktem, iż polski język nie posiada ekonomicznych i łatwo przyswajalnych odpowiedników słów opisujących jakże nowoczesną rzeczywistość komputerów i Internetu. Wszak takie określenia jak smartfon, tablet czy SPAM są tak oczywiste i tak dobrze opisują swoje desygnaty, że ich zastąpienie polskimi odpowiednikami byłoby jeśli nie niemożliwe, to z pewnością kuriozalne. Podobnie rzecz się ma z motoryzacją, gdzie wiele zwrotów ma swoje źródła w języku angielskim. W tym kontekście wystarczy wymienić wyrażenia typu tuning, kabriolet, trolejbus, skuter, kombi czy drifting. Wszystkie one funkcjonują na co dzień we współczesnej mowie i precyzyjnie określają świat samochodów. Z kolei kiedy idziemy do restauracji, nierzadko zamawiamy amerykańskiego hamburgera. W zwykłym sklepie spożywczym możemy natomiast kupić takie produkty jak whisky, keczup, czipsy, snacki czy oranżada, które pochodzą wprost z języka angielskiego. Po upadku PRL-u słownictwo związane z żywieniem wzbogaciło się jednak nie tylko o zapożyczenia angielskie, ale również włoskie. Wśród włoskich zwrotów króluje z całą pewnością pizza, którą przed nastaniem ostatniej dekady XX wieku Polacy mogli zajadać się co najwyżej podczas wątpliwego do zrealizowania wyjazdu za granicę. Tak samo rzecz ma się z kawą cappuccino czy latte, a także z lasagną czy różnego rodzaju pastami. Podobnie niewielu z nas znało produkowany we Włoszech ser ricotta czy pierożki zwane ravioli.

Równie ciekawym zjawiskiem, które szczególnie mocno nasiliło się na początku XXI wieku, jest nadawanie angielskojęzycznych nazw różnego rodzaju stanowiskom pracy. Account Manager to określenie stosowane zamiennie do kierownika ds. kluczowych klientów, Art Directorem nazywa się dyrektora artystycznego, webdesignerem twórcę stron internetowych, CEO – prezesa firmy, Product Manager to w rozumieniu korporacyjnej nowomowy osoba odpowiedzialna za rozwój danego produktu, a webdeveloper to człowiek zajmujący się tworzeniem i programowaniem serwisów internetowych. Choć powyższe zapożyczenia mogą razić w oczy swoją nachalnością, ponieważ w istocie w języku polskim istnieją ich zamienniki, to mają do spełnienia określoną funkcję. Mianowicie zastosowanie anglojęzycznych nazw stanowisk ma na celu zwiększenie ich prestiżu, dzięki czemu pracownicy mogą czuć się bardziej dowartościowani. Moim zdaniem lepiej byłoby jednak, gdyby ich pewność siebie wynikała z wysokich zarobków lub szans rozwoju, jakie oferuje dana firma, a nie z angielskich nazw stanowisk. Na marginesie tematyki zarobkowej warto przywołać powszechnie stosowane niemieckie słówko gastarbeiter, które odnosi się do osób emigrujących do Niemiec w celach zarobkowych. Przed ostatnią dekadą XX wieku było to praktycznie niemożliwe z uwagi na trudności związane z przekroczeniem granicy, dlatego niezbyt często posługiwano się wyrażeniem gastarbeiter. W słownictwie odnoszącym się do mody można natomiast odnaleźć wpływ języka francuskiego. Dzięki rozwojowi gospodarczemu po upadku PRL-u możliwe stało się bowiem świadczenie usług kosmetycznych oraz znaczne poszerzenie oferty sklepów sprzedających wszelkiego rodzaju odzież. W efekcie do języka polskiego na stałe weszły takie zwroty jak manicure, pedicure, butik czy atelier. Wśród osób interesujących się modą i stylem można zaś często usłyszeć, że ich ubrania pochodzą z kolekcji haute couture lub prêt-à-porter.

O ile do tej pory omawiałem wpływ języków obcych na język polski, kategoryzując zapożyczenia na odnoszące się do poszczególnych sfer ludzkiego życia, o tyle teraz chciałbym skupić się na ich analizie pod względem składniowym. Mianowicie zapożyczenia dzieli się na strukturalne, które nazywane są także kalkami językowymi, oraz właściwe, semantyczne, fonetyczne i sztuczne. Najbardziej oczywistym zastosowaniem zapożyczeń są tzw. kalki językowe, polegające na bezpośrednim przetłumaczeniu danego słowa z języka obcego na polski i wprowadzeniu go do mowy ojczystej. Przykładem takiego wyrażenia może być słowo mapa drogowa pochodzące od angielskiego zwrotu road map, które stosowane jest w polityce i w zarządzaniu. Z kolei wyrażenie rzeczoznawca jest kalką niemieckiego słowa Sachverständiger (czyt.: zachfersztendiger). Inny typ zapożyczeń – zapożyczenia właściwe – nie polega na tłumaczeniu, ale na zaadaptowaniu jakiegoś wyrażenia do języka polskiego przy formie dostosowanej do polskiej wymowy i pisowni. Jako przykład można podać słowa brydż, mecz, komputer, wagon. Wszystkie one pochodzą z języka angielskiego, ale stały się na tyle powszechne i oczywiste również dla nas, że traktujemy je jak polskie słowa, dlatego należą one do zapożyczeń właściwych. Ich wymowa i znaczenie są identyczne jak w języku angielskim, ale sposób zapisywania odmienny. Z kolei zapożyczenia semantyczne można zdefiniować jako wyrażenia, które zmieniają lub nadają nowe znaczenia słowom funkcjonującym w polszczyźnie już wcześniej. Oczywistym przykładem obrazującym ten rodzaj zapożyczeń jest zwrot mysz, odnoszący się do urządzenia obsługującego komputer i mający swoje źródło w angielskim słowie mouse. Zastosowanie tego typu zapożyczenia w polskiej mowie polega na niemożności znalezienia równie odpowiedniego i tak samo ekonomicznego słowa określającego urządzenie do obsługiwania komputera. Zapożyczenia fonetyczne, takie jak np. angielski interfejs, są natomiast słowami o identycznej wymowie jak ich angielskie odpowiedniki, jednak ich pisownia została dostosowana do polskich wymogów i uwarunkowań. Znaczenie zapożyczenia i angielskiego wyrazu w tym przypadku również się powiela. Ostatni rodzaj zapożyczeń występujących we współczesnej polszczyźnie to tzw. zapożyczenia sztuczne. Powstają one poprzez zestawienie w jedno słowo morfemów zaczerpniętych z języków obcych (najczęściej z greki i łaciny). Jako przykład można podać słowo telefon, składające się z cząstek tele-, czyli ‘na odległość’, oraz -fon, czyli ‘dźwięk’.

11. Nazwy własne polskie i obce - czy, jak i co odmieniać?

Nazwy własne :

• imiona, nazwiska, przezwiska, pseudonimy, przydomki ludzi, bogów i zwierząt, a coraz częściej i roślin, tajfunów, huraganów, a nawet rzeczy: Alina, Górecki;

• nazwy geograficzne i miejscowe oraz astronomiczne: Afryka, Francja,

• nazwy firm, urzędów, instytucji, organizacji, partii, sklepów, restauracji: Microsoft,

• nazwy marek i znaków handlowych, imprez, nagród, odznaczeń: Adidas,

• nazwy pomników, budynków, miejsc pamięci: Syrenka

W nazwach własnych piszemy wielkimi literami wszystkie wyrazy prócz przyimków i spójników: Prawo i Sprawiedliwość, Jan bez Ziemi

1. Odmiana polskich imion:

Wszystkie imiona polskie powinny być odmieniane (jedyny wyjątek: Beatrycze).

Odmiana nazwisk żeńskich:

Odmieniają się tylko nazwiska kobiet zakończone na -a:

a) Nazwiska o zakończeniu -owa, -ewa odmieniają się tak jak przymiotniki, np.

Bogolubowa, Bogolubowej, Bogolubową.

b) Natomiast pozostałe nazwiska żeńskie zakończone na -a odmieniają się tak jak rzeczowniki pospolite o podobnym zakończeniu:

Masina, Masiny, Masinie, Masinę, z Masiną .

Odmiana nazw obcych:

W języku polskim – jeśli to tylko możliwe – należy odmieniać obce nazwy własne.

2. Odmiana imion obcych:

a) Odmieniają się wszystkie imiona żeńskie zakończone na samogłoskę -a, np. Cynthia (Cynthii, Cynthię),

Pozostałe imiona żeńskie nie odmieniają się, np. Alice

b) Odmieniają się imiona męskie zakończone w wymowie:

- na spółgłoskę, np. Francis (Francisa, z Francisem, o Francisie),

- na -a, np. Sasza (Saszy, Saszę),

- na -o, np. Benito (Benita, o Benicie),

- na -y, np. Henry (Henry’ego, o Henrym),

- na -i, np. Giovanni (Giovanniego, o Giovannim).

Odmiana nazwisk obcych:

- Jeśli nazwisko obce ma końcówkę niewymawianą (kończące się na -e nieme), to w przypadkach zależnych otrzymuje końcówkę polską po apostrofie, np. Moore.*

- W innych sytuacjach apostrofu nie stawiamy, nawet jeśli wymaga to spolszczenia końcówki oryginalnej (nazwiska kończące się na spółgłoskę wymawianą, np. Bush, Eisenhower, kończące się na -y po samogłosce, np. Disney)*

- Nazwiska zakończone na -a (dotyczy to nie tylko nazwisk angielskich i francuskich, ale również wszystkich innych) odmieniają się tak samo jak polskie nazwiska zakończone na -a, np. Głowala

- Nazwiska kończące się w wymowie na -y lub -i po spółgłosce (pisane przez -y, -ie) odmieniają się w liczbie pojedynczej jak przymiotniki. Nazwiska na -y w dopełniaczu, celowniku i bierniku piszemy z apostrofem, gdyż głoska ta w tych przypadkach nie jest wymawiana; nazwiska zakończone na -i, -ie — bez apostrofu, np. Lully, Lully’ego, Lully’emu, z Lullym, o Lullym;

*Bez apostrofu:

Bush - Busha, z Bushem, o Bushu

*Z apostrofem:

Moore - Moore’a, Moore’owi, z Moore’em

Nieodmienne obce nazwiska:

Delacroix, Cocteau, Rousseau,

Nazwiska spolszczone:

- William Shakespeare – Williama Shakespeare’a, o Shakespearze,

3. Odmiana obcych nazw geograficznych:

Nie odmieniają się:

a) nazwy, dla których nie można ustalić wzoru odmiany, np. Baku, Capri, Dhaulagiri, Fidżi, Haiti, Hanoi, Peru, Soczi, Turku;

b) nazwy akcentowane na ostatniej sylabie, np. Calais, Clermont–Ferrand, Verdun;

c) nazwy rodzaju nijakiego zakończone na -um, np. Bochum, również spolszczone: Bizancjum, Monachium.

UWAGA: Nazwy własne rodzaju męskiego zakończone na -um odmieniamy, np. Chartum (-mu, -mie).

Powinniśmy odmieniać następujące nazwy geograficzne:

a) nazwy słowiańskie, np. Burgas (-su),

b) zakończone na -a, np. Atlanta (-ncie, -ntę),

Imiona polskie

 

Wszystkie imiona polskie powinny być odmieniane (jedyny wyjątek: Beatrycze). Imiona odmieniamy, korzystając z wzorca odmiany wyrazów pospolitych o podobnie zakończonym temacie.

Imiona obce

 

W odniesieniu do imion obcych stosujemy następujące zasady:

a) Odmieniają się wszystkie imiona żeńskie zakończone na samogłoskę -a, np. Cynthia (Cynthii, Cynthię), Linda (Lindzie, Lindę), Martha (Marcie, Marthę), Ornella (Ornelli, Ornellę), Virginia (Virginii, Virginię).

Pozostałe imiona żeńskie nie odmieniają się, np. Alice, Catherine, Deborah, Hannah, Jacqueline, Margaret, Michelle, Sally, Sarah, Scarlett, Shirley.

b) Odmieniają się imiona męskie zakończone w wymowie:

— na spółgłoskę, np. Francis (Francisa, z Francisem, o Francisie), John (Johna, z Johnem, o Johnie), Jacques (Jacques’a, z Jakiem, o Jacques’u), Helmut (Helmuta, z Helmutem, o Helmucie), Joseph (Josepha, z Josephem, o Josephie a. Josefie), Ralph (Ralpha, z Ralphem, o Ralphie a. Ralfie), Keith (Keitha, z Keithem, o Keicie a. Keisie), Kenneth (Kennetha, z Kennethem, o Kennecie a. Kennesie), Max (Maksa a. Maxa, z Maksem a. Maxem, o Maksie a. Maxie), Felix (Feliksa a. Felixa, z Feliksem a. Felixem, o Feliksie a. Felixie), Gustav (Gustava, z Gustavem, o Gustawie a. Gustavie), Yves (Yves’a, z Yves’em, o Ywie a. Yvie);

— na -a, np. Sasza (Saszy, Saszę), Wołodia (Wołodii, Wołodię);

— na -o, np. Benito (Benita, o Benicie), Claudio (Claudia, o Claudiu), Paavo (Paava, o Paawie a. Paavie);

— na -y, np. Henry (Henry’ego, o Henrym), Zachary (Zachary’ego, o Zacharym);

— na -i, np. Giovanni (Giovanniego, o Giovannim), Luigi (Luigiego, o Luigim).

Można też odmieniać imiona zakończone na -e oraz akcentowane na ostatniej sylabie, np.

Cesare [Czezare] (Cesarego, o Cesarem a. ndm);

Imre (Imrego, o Imrem a. ndm);

André (Andrégo, o Andrém a. ndm);

René (Renégo, o Reném a. ndm);

Louis (Louisa, o Louisie a. ndm).

Jednak imię François nie jest odmieniane.

Pozostałe imiona męskie nie odmieniają się, np. Andrew [Endrju], Hugh [Hju], Matthew [Metju], Radu [Radu].

Odmiana nazwisk. Uwagi ogólne

 

Ogólne zalecenie dotyczące odmiany nazwisk polskich i obcych jest następujące: jeśli tylko jest możliwe przyporządkowanie nazwiska jakiemuś wzorcowi odmiany, należy je odmieniać.

Wybór odpowiedniego wzorca odmiany zależy głównie od: 1) płci właściciela, 2) jego narodowości, 3) zakończenia nazwiska (może chodzić albo o zakończenie fonetycznej formy nazwiska, albo o zakończenie tematu).

Odmiana i pisownia nazwisk żeńskich

 

Odmieniają się tylko nazwiska kobiet zakończone na -a:

a) Nazwiska o zakończeniu -owa, -ewa odmieniają się tak jak przymiotniki, np.

Bogolubowa, DCMs. Bogolubowej, BN. Bogolubową;

Paduczewa, DCMs. Paduczewej, BN. Paduczewą.

b) Natomiast pozostałe nazwiska żeńskie zakończone na -a odmieniają się tak jak rzeczowniki pospolite o podobnym zakończeniu:

Masina, D. Masiny, CMs. Masinie, B. Masinę, N. z Masiną (jak kalina);

Fonda, D. Fondy, CMs. Fondzie, B. Fondę, N. z Fondą (jak rada);

Berganza [Berganca], DCMs. Berganzy, B. Berganzę, N. z Berganzą (jak taca).

Odmiana i pisownia nazwisk męskich

 

W następnych paragrafach scharakteryzowane zostaną podstawowe zasady odmiany i pisowni obcych nazw osobowych odnoszących się do mężczyzn. Ze względu na znaczny stopień trudności tego problemu językowego zasób osobowych nazw własnych w słowniku został rozbudowany. W artykułach hasłowych podane też zostały trudniejsze formy fleksyjne tworzone od nazw własnych. Pewnym ułatwieniem dla piszących może być możliwość nieodmieniania niektórych nazwisk, dopuszczalna, gdy nazwisko zostanie poprzedzone imieniem lub rzeczownikiem pospolitym (np. minister, prezydent). Możliwość taka dotyczy głównie nazwisk zakończonych na -e, -o oraz akcentowanych na ostatniej sylabie. Jeśli tego rodzaju możliwość w odniesieniu do określonego nazwiska istnieje, zostało to podane w odpowiednim artykule hasłowym słownika.

Nazwiska angielskie i francuskie

 

Zapisując nazwiska angielskie i francuskie, zachowujemy ortografię oryginału, np. Hillary, Purcell, Reagan, Dumas, Poussin, de Gaulle. Istnieje nieliczna grupa nazwisk, które mają wariantywną pisownię — oryginalną lub spolszczoną: Shakespeare — Szekspir, Washington — Waszyngton, Voltaire — Wolter, Molière — Molier, Rousseau — Russo, Balzac — Balzak, Chopin — Szopen oraz Montesquieu — Monteskiusz i Descartes — Kartezjusz (w dwu ostatnich wypadkach spolszczono łacińską formę nazwisk).

Nazwiska zakończone w piśmie na spółgłoski lub -y po samogłosce

 

Nazwiska angielskie i francuskie zakończone w piśmie:

a) na spółgłoskę wymawianą, np. Auber, Bush, Eisenhower, Eliot, Pasteur;

b) na spółgłoskę niewymawianą, np. Anouilh, Diderot, Jouvet, Mitterrand, Villon; wyjątkiem są niewymawiane spółgłoski -s, -x (zob. 66.5., 66.7.);

c) na -y po samogłosce, np. Disney, Macaulay, Shelley;

otrzymują końcówki polskie bez apostrofu, np.

Auber, Aubera, z Auberem, o Auberze;

Bush, Busha, z Bushem, o Bushu;

Anouilh, Anouilha, z Anouilhem, o Anouilhu;

Mitterrand, Mitterranda, z Mitterrandem, o Mitterrandzie;

Disney, Disneya, z Disneyem, o Disneyu.

UWAGA: Nazwiska angielskie zakończone na -ow oraz -owe nie odmieniają się w miejscowniku, np.

Arrow, Arrowa, z Arrowem, o Arrow;

Longfellow, Longfellowa, z Longfellowem, o Longfellow;

Marlow, Marlowa, z Marlowem, o Marlow;

Crowe, Crowe’a (a. Crowe), z Crowe’em (a. Crowe), o Crowe.

Nazwiska zakończone na -e nieme

 

Nazwiska angielskie i francuskie zakończone na -e nieme (tzn. niewymawiane) otrzymują polskie końcówki po apostrofie, np.

Donne, Donne’a, Donne’owi, z Donne’em;

Larousse, Larousse’a, Larousse’owi, z Larousse’em;

Malebranche, Malebranche’a, Malebranche’owi, z Malebranche’em;

Montaigne, Montaigne’a, Montaigne’owi, z Montaigne’em;

Moore, Moore’a, Moore’owi, z Moore’em;

Wallace, Wallace’a, Wallace’owi, z Wallace’em.

Dotyczy to także nazwisk, w których po -e niemym pojawia się spółgłoska, np.

Combes, Combes’a, Combes’owi, z Combes’em;

Descartes, Descartes’a, Descartes’owi, z Descartes’em.

Jeśli w którymś przypadku gramatycznym brzmienie głoski (...)

 

Jeśli w którymś przypadku gramatycznym brzmienie głoski kończącej temat nazwiska angielskiego lub francuskiego jest w języku polskim inne niż w języku oryginalnym, wówczas zakończenie tego nazwiska piszemy zgodnie z pisownią polską, a -e nieme i apostrof — jeśli występują — pomijamy:

Barthes [Bart], z Barthes’em, o Barcie;

Grant [Grant], z Grantem, o Grancie;

Ingres [Ęgr], z Ingres’em, o Ingrze;

Joyce [Dżojs], z Joyce’em, o Joysie;

Mauriac [Moriak], z Mauriakiem, o Mauriacu;

Proust [Prust], z Proustem, o Prouście;

Remarque [Remark], z Remarkiem, o Remarque’u;

Robespierre [Robespier], z Robespierre’em, o Robespierze;

Ronsard [Ronsar], z Ronsardem, o Ronsardzie;

Smith [Smit a. Smis], ze Smithem, o Smicie (a. Smisie).

Inne przykłady: Combes — o Combie, Descartes — o Descarcie, Manet — o Manecie.

UWAGA 1: Powyższe zalecenie odnosi się również do nazwisk typu Ustinov, Sainte-Beuve, które jakkolwiek nie zmieniają w miejscowniku liczby pojedynczej brzmienia tematycznej spółgłoski, to jednak – zgodnie z ogólnymi zasadami pisowni głosek zmiękczonych w języku polskim (por. 7.1., punkt b) – przyjmują odmianę następującą: Ustinov – o Ustinowie, Sainte-Beuve – o Sainte-Beuwie. Jednak szanując przyzwyczajenia użytkowników polszczyzny, dopuszcza się w tym przypadku gramatycznym również wersję pisaną przez -vi-.

UWAGA 2: Nazwiska odmienne, które mają w zakończeniu tematu literę -x, także przyjmują w deklinacji polskie końcówki, bez apostrofu, albo utrzymują w zapisie tematyczne -x-, np. Hendrix – Hendriksa, z Hendriksem, o Hendriksie albo Hendrixa, z Hendrixem, o Hendrixie (por. 9.5., punkt d).

Nazwiska zakończone na -a

 

Nazwiska zakończone na -a (dotyczy to nie tylko nazwisk angielskich i francuskich, ale również wszystkich innych) odmieniają się tak samo jak polskie nazwiska zakończone na -a, np. Głowala, Kudera, Lasota:

Zola, Zoli, Zolę, Zolą;

O’Hara, O’Harze, O’Harę, O’Harą;

Gambetta, Gambetcie, Gambettę, Gambettą.

W liczbie mnogiej nazwiska tu wymienione oraz w punktach 66.1., 66.2., 66.3. również otrzymują polskie końcówki (odmieniają się tak, jak polskie nazwiska zakończone na spółgłoskę, np. Kurek, Gołąb):

Mitterrandowie, o Mitterrandach (możliwe też państwo Mitterrand, o państwu Mitterrand — zob. 64.);

Lumière’owie, o Lumière’ach (możliwe też bracia Lumière, o braciach Lumière — zob. 64.).

Nazwiska kończące się w wymowie na e

 

Nazwiska angielskie i francuskie kończące się w wymowie na e (zapisywane w postaci liter: -é, -ée, -ai, -eu, po których mogą następować niewymawiane litery spółgłoskowe -s lub -x) odmieniamy w liczbie pojedynczej jak przymiotniki, przy czym odcinamy początkowe samogłoski z końcówek przymiotnikowych -ego, -emu, -im (-ym), np.

Debré, Debrégo, Debrém;

Mérimée, Mériméego, Mériméem;

Montesquieu, Montesquieugo, Montesquieum.

a) Po niewymawianej literze spółgłoskowej -s lub -x stawiamy apostrof, np.

Beaumarchais, Beaumarchais’go, o Beaumarchais’m;

Marais, Marais’go, o Marais’m;

Rabelais, Rabelais’go, o Rabelais’m.

UWAGA: W niektórych opracowaniach zaleca się nieodmienność nazwisk Beaumarchais, Resnais, Marais, ale zalecenie to nie wydaje się uzasadnione; oczywiście gdy te nazwiska wystąpią wraz z imieniem, mogą pozostawać w formie nieodmiennej.

b) W liczbie mnogiej nazwiska zakończone w wymowie na -e zwykle występują z imionami, nazwami, tytułami — są więc nieodmienne, np. państwu Debré.

Nazwiska kończące się w wymowie na -y lub -i po spółgłosce

 

Nazwiska angielskie i francuskie kończące się w wymowie na -y lub -i po spółgłosce (pisane przez -y, -ie) odmieniają się w liczbie pojedynczej jak przymiotniki. Nazwiska na -y w dopełniaczu, celowniku i bierniku piszemy z apostrofem, gdyż głoska ta w tych przypadkach nie jest wymawiana; nazwiska zakończone na -i, -ie — bez apostrofu, np.

Lully, Lully’ego, Lully’emu, z Lullym, o Lullym;

Murphy, Murphy’ego, Murphy’emu, z Murphym, o Murphym;

O’Kelly, O’Kelly’ego, O’Kelly’emu, z O’Kellym, o O’Kellym;

Valéry, Valéry’ego, Valéry’emu, z Valérym, o Valérym;

Christie, Christiego, Christiemu, z Christiem, o Christiem;

Muskie, Muskiego, Muskiemu, z Muskiem, o Muskiem.

Nazwa własna Piotr Curie tradycyjnie pozostaje w postaci nieodmienianej (jednakże przede wszystkim ze względu na zwyczaj użycia tego nazwiska z imieniem, gdyby nazwisko Curie oznaczające mężczyznę wystąpiło samodzielnie, należałoby je odmieniać według powyższego wzoru).

a) Nazwiska Francuzów i Anglików zakończone na -i są z reguły pochodzenia włoskiego i odmieniają się tak jak podobne nazwiska włoskie — zob. 68.

b) W liczbie mnogiej nazwiska zakończone na -y, -i, -ie występują zwykle z imionami, nazwami, tytułami — toteż nie są odmieniane, np. bracia Kennedy (ale też wyjątkowo: o Kennedych), o państwu Murphy.

Nazwiska na -o, -oi, -au, -ou

 

Nazwiska angielskie i francuskie na -o, -oi, -au, -ou, również wtedy, gdy po tych samogłoskach następują niewymawiane -s, -x, są nieodmienne, np. Hugo, Cocteau, Despiau, Pompidou, Laclos, Delacroix, Giraudoux.

Wyjątkiem jest tylko spolszczona forma nazwiska Rousseau: Russo, Russa, z Russem, o Russie.

Odmiana i pisownia obcych nazw geograficznych

 

Nazwy geograficzne mogą funkcjonować w postaci spolszczonej bądź oryginalnej. Jednak — inaczej niż w przypadku niektórych nazwisk — nie ma zwyczaju stosowania pisowni wariantywnej. Pisownia Paris, Madrid, Roma zamiast Paryż, Madryt, Rzym nie jest akceptowana przez normy języka polskiego. Pisownię spolszczoną stosujemy tylko w odniesieniu do nazw państw, kilkudziesięciu dużych lub ważnych miast i sporadycznie do innych nazw geograficznych; w odniesieniu do pozostałych stosujemy pisownię oryginalną.

Odmiana nazw geograficznych spolszczonych opiera się na tych samych zasadach co odmiana rzeczowników pospolitych oraz rodzimych nazw własnych, chociaż w wielu wypadkach trudno ustalić, do którego wzorca odmiany należałoby włączyć określoną nazwę. Znacznie trudniejsza jest natomiast odmiana nazw niespolszczonych. Również w tym wypadku obowiązuje zasada odmieniania nazwy, jeśli tylko da się ją włączyć do określonego wzorca odmiany. Mimo to jednak liczne nazwy miejscowe pozostają nieodmienne, choć można by dla nich taki wzorzec łatwo znaleźć (zob. 70.5.). Ustalenia szczegółowe są następujące:

Nieodmienne nazwy geograficzne

Odmienne nazwy geograficzne

Nazwy zakończone na -o

Nazwy zakończone na -e

Nazwy nieodmienne

Nieodmienne obce nazwy geograficzne

 

Nie odmieniają się:

a) nazwy, dla których nie można ustalić wzoru odmiany, np. Baku, Capri, Dhaulagiri, Fidżi, Haiti, Hanoi, Peru, Soczi, Turku;

b) nazwy akcentowane na ostatniej sylabie, np. Calais, Clermont-Ferrand, Verdun;

c) nazwy rodzaju nijakiego zakończone na -um, np. Bochum, również spolszczone: Bizancjum, Monachium.

UWAGA: Nazwy własne rodzaju męskiego zakończone na -um odmieniamy, np. Chartum (-mu, -mie).

Odmienne nazwy geograficzne

 

Powinniśmy odmieniać następujące obce nazwy geograficzne:

a) nazwy słowiańskie, np. Burgas (-su), Hradec (-dca);

b) zakończone na -a, np. Atlanta (-ncie, -ntę), Casablanca (-nce, -ncę — można też pisać Casablanka), Parma (-mie, -mę).

UWAGA: W odmianie nazw miejscowości włoskich typu Piacenza, Vicenza, Monza, Faenza, wymawianych [pjaczenca, wiczenca, monca, faenca], stosuje się wzór odmiany nazw polskich Dębica, Kamienica:

Piacenza, DCMs. Piacenzy, B. Piacenzę, N. Piacenzą;

Vicenza, DCMs. Vicenzy, B. Vicenzę, N. Vicenzą.

Nazwy zakończone na -o

 

Obce nazwy geograficzne zakończone na -o mogą się odmieniać, jednak większość z nich zwyczajowo nie jest odmieniana, np. Chicago, Orinoko, Oslo, Palermo.

Nazwy zakończone na -e

 

Wśród obcych nazw geograficznych zakończonych na -e odmieniają się tylko słowiańskie, np. Pardubice (-bic), Skopje (-pja), ale w tym wypadku jest też możliwa nieodmienność; niesłowiańskie, zarówno te, które mają -e w wymowie, jak i te, które mają -e tylko w zapisie, pozostają nieodmienne, np. Halle, Udine, Newcastle.

Nazwy nieodmienne

 

Zwyczajowo nie odmieniamy wielu innych obcych nazw miast (nie wspomnianych w punktach 70.1.–70.4.), np. Bonn, Los Angeles, Nottingham.


Zwyczaj językowy

 

Wiele nazw własnych zostało spolszczonych już dawno i ta pisownia, niezależnie od tego, czy jest zgodna z zasadami transkrypcji lub transliteracji, czy nie, musi zostać zaakceptowana przez wszystkich użytkowników języka polskiego. W tej grupie możemy wymienić takie nazwiska, jak Szekspir, Waszyngton, Szopen, Russo, Wolter, Molier, Balzak, natomiast wśród nazw geograficznych Paryż, Londyn, Akwizgran, Rzym, Mediolan, Wenecja i wiele innych.

Zasady transkrypcji i transliteracji

 

Zasady transkrypcji i transliteracji. W słowniku podajemy odpowiednie objaśnienia i tabele umożliwiające zapis nazw pochodzących z obcych języków, w których stosuje się znaki nieobecne w polskim alfabecie. W pracach naukowych oraz we wszystkich innych opracowaniach, od których żądamy maksymalnej ścisłości zapisu, powinniśmy stosować transliterację, która jest najbardziej precyzyjna, a w wielu wypadkach unormowana, również na mocy norm międzynarodowych (inne uwagi na ten temat — zob. 75.).

Wymagania systemu językowego

 

W tym wypadku chodzi głównie o to, że w języku polskim — jeśli to tylko możliwe — powinniśmy odmieniać obce nazwy własne. Pojawia się w związku z tym wiele problemów ortograficznych. Jak na przykład zapisać miejscownik od nazwisk Brandt, Peirce? Poprawnymi formami są: o Brandcie, Peirsie (nie: Brandtcie, Peirce’ie).

Pochodzenie nazwy własnej a sposób zapisu i odmiany

 

Pochodzenie nazwy własnej również jest czynnikiem, który ma istotny wpływ na sposób zapisu i odmiany. Anglosaskie imię Charles [Czarls] będziemy odmieniać Charlesa [Czarlsa], Charlesie [Czarlsie]; francuskie imię o identycznej pisowni wymawiamy jednak [Szarl], więc musi ono w takim razie być odmieniane tak: [Szarla, Szarlu], a zapisywane Charles’a, Charles’u (por. 66.2.).

UWAGA: W nazwach arabskich często stosuje się w zapisach rodzajnik określony al- (i jego odmiany: ad-, an-, ar-, as-, asz-, at-, az-). Z rzeczownikami pospolitymi piszemy go małą literą, np. al-kaida (= baza), natomiast z nazwami własnymi – wielką, np. Al-Kaida (= Baza), Al-Asad, An-Nasirijja. Jeśli jednak nazwa własna składa się z dwóch elementów, to rodzajnik przed drugim elementem piszemy małą literą: Hafiz al-Asad, Anwar as-Sadat.

12. Norma współczesnej polszczyzny - stanowienie, kodyfikacja, stan

POJĘCIE NORMY JĘZYKOWEJ

Współczesne językoznawstwo korzysta z wewnątrzjęzykowego ujęcia normy. Pod pojęciem normy językowej rozumiemy „zbiór tych elementów językowych, które są w pewnym okresie uznane przez jakąś społeczność za wzorcowe, poprawne albo co najmniej dopuszczalne.” Taka definicja normy językowej dąży do zobiektywizowania tego pojęcia i przeniesienia rozważań z nią związanych na płaszczyznę wewnątrzjęzykową. Normę traktuje się dziś jako jeden z poziomów wewnętrznej organizacji języka – obok systemu rozumianego jako zbiór możliwości i obok mówienia, czyli swobodnej działalności komunikatywnej.

1. STAN NORMY JĘZYKOWEJ

Stan normy językowej dzieli się na dwa poziomy – normę wzorcową i użytkową.

Norma wzorcowa właściwa jest kontaktom publicznym, oficjalnym. W dużej mierze odpowiada dawnemu stylowi wyższemu, obejmuje też style naukowy i urzędowy. Na użytkowników języka stawia wysokie wymagania, liczy się bardziej z tradycją, estetyką, precyzją form językowych. Norma wzorcowa jest względnie jednolita, ponadśrodowiskowa, ale nieco zróżnicowana regionalnie. Norma ta jest używana świadomie, z poczuciem wartości semantycznej i stylistycznej, akceptowana jest przez zdecydowaną większość wykształconych Polaków.

Norma wzorcowa powinna bezwzględnie obowiązywać filologów polonistów, którzy występują bardzo często w roli recenzentów (korektorów i opiniodawców) wobec osób spoza swojego środowiska. Dla tych osób wskazana jest norma standardowa.

Norma użytkowa

2. KODYFIKACJA NORMY

Kodyfikacja normy to podtrzymywanie swoistości i integralności języka ogólnego, w tym usuwanie elementów naruszających jego wewnętrzną harmonię i równowagę oraz użycie środków istotnych dla społeczeństwa. Kodyfikacja jest, w przeciwieństwie do normy, elementem zewnętrznym wobec samego języka.

Kodyfikacja to także odbicie normy językowej w konkretnych gramatykach czy słownikach. Dlatego pasuje do kodyfikacji również porównanie, że jest ona „fotografią normy” wydobytej z tekstów językowych. Podstawowym obowiązkiem kodyfikatora jest oceniać elementy uzusu utrwalone w normie, jednak decydować musi przede wszystkim na podstawie analizy tego, co w języku istnieje, a nie na podstawie swoich doświadczeń i preferencji.

Charakterystyczną cechą normy jest jej zmienność i ciągła ewolucja, kodyfikacja zaś jest w swej istocie statyczna, rejestruje stan normy w tym okresie, w którym dany słownik czy podręcznik powstawał. Wynikiem tej sprzeczności jest stały rozziew między normą a kodyfikacją. Rozziew ten nie powinien być duży, jednak kodyfikacja już z natury rzeczy pozostaje zawsze nieco z tyłu za aktualną normą językową. Dlatego wygodne wydaje się uwzględnienie dwóch poziomów normy (wzorcowej i użytkowej) i stosunkowo częste dokonywanie kodyfikacji, które powinno odbywać się co 10-15 lat. Można byłoby też uwzględnić wariant kodyfikacji częściowej („kroczącej”), czyli wprowadzanie na bieżąco, co dwa, trzy lata, nowych rozstrzygnięć kodyfikacyjnych i publikowanie ich w nowych wydaniach słowników normatywnych. Jednak ta propozycja wymagałaby od świadomych użytkowników języka nieustannego „śledzenia” kodyfikacji normy językowej. Lepszy wydaje się więc wariant „skokowej” kodyfikacji.

3. STANOWIENIE NORMY JĘZYKOWEJ

Jedną z wielu funkcji języka polskiego jest stanowienie, nazywane funkcją stanowiącą (kreatywną) – powoduje ono powstanie nowego stanu rzeczywistości pozajęzykowej: tworzenie, kreowanie świata przedstawionego w utworach literackich.

13. Using corpora in language teaching and learning

What is a corpus?

Loosely defined, a corpus is "any body of text" (McEnery & Wilson, 2001, p. 197), that is, any collection of recorded instances of spoken or written language. For example, a pile of written assignments (e.g., essays) waiting to be marked is, roughly speaking, a corpus. Let us assume that these assignments have been written by students about to start a language course, and that the teacher has not taught the students before. The teacher can read the essays to form a general impression of the strengths and needs of the new class, but he/she may also want to focus on specific areas of interest. For example, while reading the assignments, the teacher may realise that the learners frequently make collocation errors. In order to examine the problem more closely, the teacher can go through the assignments, locate and list the unacceptable collocations, and determine whether there are any recurring patterns, that is, whether learners need help with the collocations of particular words, perhaps words normally associated with the topic of the assignment.
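A scan of this kind can be partly automated. The sketch below is a minimal illustration only, assuming the essays are plain-text files in a folder named essays/ (the path, the deliberately simple tokenizer and the cut-off of 20 pairs are illustrative choices, not part of the article): it counts adjacent word pairs so that the teacher can skim the most frequent combinations for suspect collocations.

```python
# Minimal sketch: scan a folder of student essays for frequent word pairs
# (candidate collocations). Assumes plain-text files in ./essays/ -- the
# path and the frequency cut-off are illustrative assumptions.
import glob
import re
from collections import Counter

def tokenize(text):
    """Lowercase and keep only letter sequences; a deliberately simple tokenizer."""
    return re.findall(r"[a-z]+", text.lower())

bigrams = Counter()
for path in glob.glob("essays/*.txt"):
    with open(path, encoding="utf-8") as f:
        tokens = tokenize(f.read())
    bigrams.update(zip(tokens, tokens[1:]))

# List the most frequent adjacent word pairs; the teacher can then judge
# which of them are unacceptable collocations worth addressing in class.
for (w1, w2), freq in bigrams.most_common(20):
    print(f"{w1} {w2}\t{freq}")
```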

Types of corpora

Corpora come in many shapes and sizes, because they are built to serve different purposes. [3] There are two philosophies behind their design, leading to the distinction between reference and monitor corpora. Reference corpora have a fixed size; that is, they are not expandable (e.g., the British National Corpus), whereas monitor corpora are expandable; that is, texts are continuously being added (e.g., the Bank of English). Another design-related distinction is whether a corpus contains whole texts, or merely samples of a specified length. The latter option allows a greater variety of texts to be included in a corpus of a given size.

In terms of content, corpora can be either general, that is, attempt to reflect a specific language or variety in all its contexts of use (e.g., the American National Corpus), or specialised, that is, aim to focus on specific contexts and users (e.g., Michigan Corpus of Academic Spoken English), and they can contain written or spoken language. Corpora can also represent the different varieties of a single language. For example, the International Corpus of English (ICE) contains one-million-word corpora representative of different varieties of English (British, Indian, Singaporean, etc.). As implied in the previous section, corpora may contain language produced by native or non-native speakers (usually learners). Finally, corpora can be monolingual (i.e., contain samples of only one language), or multilingual. Multilingual corpora are of two types: they can contain the same text-types in different languages, or they can contain the same texts translated into different languages, in which case they are also known as parallel corpora (Hunston, 2002; Kennedy, 1998; McEnery & Wilson, 2001; Meyer, 2002).

Creating a useful corpus

First, the texts a corpus is to contain are selected and stored in electronic format. Written texts, if they are not already in electronic form (e.g., downloaded from the Internet, submitted by learners on a disc or CD-ROM, or sent by e-mail), must be scanned; spoken texts must be recorded and transcribed. [4] The result of this stage is a raw corpus. Although a raw corpus can yield some information about language use, its usefulness is limited. For example, although the frequency of the word drive in the raw corpus can be determined, we will not know how many times it occurs as a noun and how many as a verb. Of course, different instances could be counted manually, but this would defeat the purpose of compiling a corpus.

The utility and flexibility of a corpus can be increased by adding coding that a computer can recognise. Labels (or tags) are attached to the words, phrases, sentences, paragraphs, sections, or to entire texts in the corpus. Information related to non-linguistic properties of the texts is referred to as mark-up. Mark-up may give information about the source of the text (e.g., book, newspaper), the date of publication or broadcast, the author or participants, or text sections (e.g., introduction, conclusion). Information related to the linguistic properties of the texts in the corpus is called annotation. Most L1 corpora are annotated for the part of speech and form of the words (e.g., singular/plural, present/past tense). This type of annotation is also called grammatical annotation, or tagging. For example, the word teaching would be tagged 'teaching_VVI' if it was a present participle (as in 'she was teaching'), and 'teaching_NN1' if it was used as a noun (as in 'language teaching'). Corpora can also be annotated for lexical sense (e.g., lexis denoting belief, expectation) and pragmatic function (e.g., request, invitation). [5] What kind of mark-up or annotation is added to a corpus is determined by the information to be extracted. Sample 1 shows the three questions asked in the second paragraph of this article, annotated for part of speech. [6]
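As a concrete illustration of grammatical annotation, the sketch below tags a couple of invented sentences with NLTK's default part-of-speech tagger, prints them in the word_TAG format described above, and then counts how often drive occurs as a noun versus a verb – the question a raw corpus could not answer. Note that NLTK uses Penn Treebank tags (NN, VBP, VBG) rather than the CLAWS tags (NN1, VVI) quoted in the text, and the example sentences are invented for the sketch.

```python
# Sketch of grammatical annotation (POS tagging) with NLTK. The tagger
# uses Penn Treebank tags, not the CLAWS tags cited in the article; the
# sentences are invented examples.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

raw = "They drive to work every day. The drive to the coast was long."
tagged = [nltk.pos_tag(nltk.word_tokenize(sent))
          for sent in nltk.sent_tokenize(raw)]

# Print each sentence in word_TAG form, e.g. drive_VBP or drive_NN.
for sent in tagged:
    print(" ".join(f"{word}_{tag}" for word, tag in sent))

# Count how many times "drive" occurs as a noun vs. as a verb -- the kind
# of question an untagged (raw) corpus cannot answer directly.
noun = sum(1 for s in tagged for w, t in s if w.lower() == "drive" and t.startswith("NN"))
verb = sum(1 for s in tagged for w, t in s if w.lower() == "drive" and t.startswith("VB"))
print("drive as noun:", noun, "| drive as verb:", verb)
```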

Corpora in the classroom

Before examining ways in which corpora can be used as (sources of) classroom materials, we need to clarify that a data-driven, awareness-raising approach is not necessarily linked to the use of corpora. Teachers can use texts containing the target language features and, through awareness-raising tasks, guide learners to discover the behaviour of lexical, grammatical or discourse elements. Therefore, it would be helpful to distinguish between text-based and corpus-based approaches to data-driven learning. [15]

Corpora can be used in language teaching in two ways (Leech, 1997, p. 10). In the soft version, only the teacher needs access to a corpus and the relevant software, and the skills to use them. The teacher prints out examples from the corpus and devises the tasks, and learners then work with these corpus-derived and corpus-based materials (Bernardini, 2004; Granger & Tribble, 1998; Osbourne, 2000; Tribble, 1997b; Tribble & Jones, 1990). Usually corpus examples are in the form of a concordance, where the word or structure being examined in the task appears in the middle of each line, so that patterns are more easily discernible (see Sample 2). In the hard version, learners themselves need direct access to computer and corpus facilities and the skills to use them (Aston, 1996). Tasks can be devised by the teacher (Tognini-Bonelli, 2001), contained within a CALL programme (Hughes, 1997; Milton, 1998), or chosen by the learners, with or without the teacher's guidance (Bernardini, 2002).
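
As a hypothetical illustration of the soft version (not taken from the sources cited above), a teacher could generate KWIC-style concordance lines for a printout with a few lines of Python and NLTK; the Brown corpus and the node word drive are arbitrary choices for the sketch:

# A minimal sketch, assuming NLTK and its Brown corpus data are available
# (nltk.download('brown')). Prints concordance lines with the node word centred.
import nltk
from nltk.corpus import brown

text = nltk.Text(brown.words(categories="news"))
text.concordance("drive", width=70, lines=10)   # printable KWIC lines for a handout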

Taking into consideration the aims of a lesson, the design or selection of materials and the management of learning, in relation to teachers and learners, we can define combinations that cover the spectrum from totally teacher-centred to totally learner-centred. At the teacher-centred end, the teacher decides on the aims of the lesson, selects/designs the materials and manages the lesson. At the learner-centred end, the learner decides on all three, with the teacher or computer programme acting as facilitator and guide. Of course, there can be intermediate combinations, particularly when decisions are taken collaboratively between teacher and learners.

13. Using Corpora in language teaching and learning

In linguistics, a corpus (plural corpora) or text corpus is a large and structured set of texts (nowadays usually electronically stored and processed). They are used to do statistical analysis and hypothesis testing, checking occurrences or validating linguistic rules within a specific language territory.

A corpus may contain texts in a single language (monolingual corpus) or text data in multiple languages (multilingual corpus). Multilingual corpora that have been specially formatted for side-by-side comparison are called aligned parallel corpora.

In order to make the corpora more useful for doing linguistic research, they are often subjected to a process known as annotation. An example of annotating a corpus is part-of-speech tagging, or POS-tagging, in which information about each word's part of speech (verb, noun, adjective, etc.) is added to the corpus in the form of tags.

Another example is indicating the lemma (base) form of each word. When the language of the corpus is not a working language of the researchers who use it, interlinear glossing is used to make the annotation bilingual.

Some corpora have further structured levels of analysis applied. In particular, a number of smaller corpora may be fully parsed. Such corpora are usually called Treebanks or Parsed Corpora. The difficulty of ensuring that the entire corpus is completely and consistently annotated means that these corpora are usually smaller, containing around one to three million words. Other levels of linguistic structured analysis are possible, including annotations for morphology, semantics and pragmatics.

Corpora are the main knowledge base in corpus linguistics. The analysis and processing of various types of corpora are also the subject of much work in computational linguistics, speech recognition and machine translation, where they are often used to create hidden Markov models for part of speech tagging and other purposes. Corpora and frequency lists derived from them are useful for language teaching. Corpora can be considered as a type of foreign language writing aid as the contextualised grammatical knowledge acquired by non-native language users through exposure to authentic texts in corpora allows learners to grasp the manner of sentence formation in the target language, enabling effective writing.
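
To illustrate the point about frequency lists (my own sketch, not something described in the text), a simple word-frequency list can be derived from a corpus with standard Python tools; the Brown corpus is just a convenient stand-in for whatever corpus a teacher might use:

# A minimal sketch, assuming NLTK's Brown corpus is available (nltk.download('brown')).
from collections import Counter
from nltk.corpus import brown

freq = Counter(w.lower() for w in brown.words() if w.isalpha())
for word, count in freq.most_common(20):        # the 20 most frequent word forms
    print(f"{word}\t{count}")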

The corpus-based approach to linguistics and language education has gained prominence over the past four decades, particularly since the mid-1980s. This is because corpus analysis can be illuminating ‘in virtually all branches of linguistics or language learning’.

One of the strengths of corpus data lies in its empirical nature, which pools together the intuitions of a great number of speakers and makes linguistic analysis more objective.

The use of corpora in language teaching and learning has been more indirect than direct. This is perhaps because the direct use of corpora in language pedagogy is restricted by a number of factors including, for example, the level and experience of learners, time constraints, curricular requirements, knowledge and skills required of teachers for corpus analysis and pedagogical mediation, and the access to resources such as computers, and appropriate software tools and corpora, or a combination of these (see the concluding section for further discussion).

It has been noted that non-corpus-based grammars can contain biases while corpora can help to improve grammatical descriptions.

For language pedagogy the most important developments in lexicography relate to the learner dictionary. Yet corpus-based learner dictionaries have a relatively short history: it was only in 1987 that the Collins Cobuild English Language Dictionary (Sinclair 1987) was published as the first ‘fully corpus-based’ dictionary. The impact of this corpus-based dictionary was such that most other publishers in the ELT market followed Collins’ lead.

A simple yet important role of corpora in language education is to provide more realistic examples of language usage that reflect the complexities and nuances of natural language.

Corpora are useful in this respect, not only because collocations can only be measured reliably in quantitative terms, but also because the KWIC (key word in context) view of corpus data exposes learners to a great deal of authentic data in a structured way. Our view is in line with Kennedy (2003), who discusses the relationship between corpus data and the nature of language learning, focusing on the teaching of collocations. The author argues that second or foreign language learning is a process of learning ‘explicit knowledge’ with awareness, which requires a great deal of exposure to language data.
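
As a hedged sketch of what measuring collocations quantitatively can look like in practice (the corpus, the frequency cut-off and the association measure are my assumptions, not Kennedy's), NLTK's collocation finder ranks word pairs by pointwise mutual information:

# A minimal sketch, assuming NLTK and its Brown corpus data are installed.
from nltk.corpus import brown
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(w.lower() for w in brown.words())
finder.apply_freq_filter(5)                     # ignore pairs seen fewer than 5 times
print(finder.nbest(measures.pmi, 15))           # strongest collocations by PMI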

In addition to the lexical focus, corpus-based teaching materials try to demonstrate how the target language is actually used in different contexts.

14. Second language acquisition theories (behaviourism, cognitivism, nativism, interactionism).

Theories of second-language acquisition

Theories of second-language acquisition are various theories and hypotheses in the field of second-language acquisition about how people learn a second language. Research in second-language acquisition is closely related to several disciplines including linguistics, sociolinguistics, psychology, neuroscience, and education, and consequently most theories of second-language acquisition can be identified as having roots in one of them. Each of these theories can be thought of as shedding light on one part of the language learning process; however, no one overarching theory of second-language acquisition has yet been widely accepted by researchers.

History

As second-language acquisition began as an interdisciplinary field, it is hard to pin down a precise starting date.[1] However, there are two publications in particular that are seen as instrumental to the development of the modern study of SLA: Pit Corder's 1967 essay The Significance of Learners' Errors, and Larry Selinker's 1972 article Interlanguage. Corder's essay rejected a behaviorist account of SLA and suggested that learners made use of intrinsic internal linguistic processes; Selinker's article argued that second-language learners possess their own individual linguistic systems that are independent from both the first and second languages.[2]

In the 1970s the general trend in SLA was for research exploring the ideas of Corder and Selinker, and refuting behaviorist theories of language acquisition. Examples include research into error analysis, studies in transitional stages of second-language ability, and the "morpheme studies" investigating the order in which learners acquired linguistic features. The 70s were dominated by naturalistic studies of people learning English as a second language.[2]

By the 1980s, the theories of Stephen Krashen had become the prominent paradigm in SLA. In his theories, often collectively known as the Input Hypothesis, Krashen suggested that language acquisition is driven solely by comprehensible input, language input that learners can understand. Krashen's model was influential in the field of SLA and also had a large influence on language teaching, but it left some important processes in SLA unexplained. Research in the 1980s was characterized by the attempt to fill in these gaps. Some approaches included Lydia White's descriptions of learner competence, and Manfred Pienemann's use of speech processing models and lexical functional grammar to explain learner output. This period also saw the beginning of approaches based in other disciplines, such as the psychological approach of connectionism.[2]

The 1990s saw a host of new theories introduced to the field, such as Michael Long's interaction hypothesis, Merrill Swain's output hypothesis, and Richard Schmidt's noticing hypothesis. However, the two main areas of research interest were linguistic theories of SLA based upon Noam Chomsky's universal grammar, and psychological approaches such as skill acquisition theory and connectionism. The latter category also saw the new theories of processability and input processing in this time period. The 1990s also saw the introduction of sociocultural theory, an approach to explain second-language acquisition in terms of the social environment of the learner.[2]

In the 2000s research was focused on much the same areas as in the 1990s, with research split into two main camps of linguistic and psychological approaches. VanPatten and Benati do not see this state of affairs as changing in the near future, pointing to the support both areas of research have in the wider fields of linguistics and psychology, respectively.[2]

Semantic theory

For the second-language learner, the acquisition of meaning is arguably the most important task. Meaning is at the heart of a language, not its exotic sounds or elegant sentence structures. There are several types of meaning: lexical, grammatical, semantic, and pragmatic. All of them contribute to the learner's overall command of the second language. [3]

Lexical meaning – meaning that is stored in our mental lexicon;

Grammatical meaning – comes into consideration when calculating the meaning of a sentence; usually encoded in inflectional morphology (e.g., -ed for the past simple, -'s to indicate possession)

Semantic meaning – word meaning;

Pragmatic meaning – meaning that depends on context, requires knowledge of the world to decipher; for example, when someone asks on the phone, “Is Mike there?” he doesn’t want to know if Mike is physically there; he wants to know if he can talk to Mike.

Sociocultural theory

Sociocultural theory derives from the work of Lev Vygotsky and the Vygotsky Circle in Moscow from the 1920s onwards; the term itself was coined by Wertsch in 1985. It is the notion that human mental functioning develops through participation in culturally mediated, socially organised activities. [4]

Universal grammar

From the field of linguistics, the most influential theory by far has been Chomsky's theory of Universal Grammar (UG). The UG model of principles, basic properties which all languages share, and parameters, properties which can vary between languages, has been the basis for much second-language research.

From a UG perspective, learning the grammar of a second language is simply a matter of setting the correct parameters. Take the pro-drop parameter, which dictates whether or not sentences must have a subject in order to be grammatically correct. This parameter can have two values: positive, in which case sentences do not necessarily need a subject, and negative, in which case subjects must be present. In German the sentence "Er spricht" (he speaks) is grammatical, but the sentence "Spricht" (speaks) is ungrammatical. In Italian, however, the sentence "Parla" (speaks) is perfectly normal and grammatically correct.[5] A German speaker learning Italian would only need to deduce that subjects are optional from the language he hears, and then set his pro-drop parameter for Italian accordingly. Once he has set all the parameters in the language correctly, then from a UG perspective he can be said to have learned Italian, i.e. he will always produce perfectly correct Italian sentences.
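
Purely as an illustration of the parameter-setting idea (a deliberate oversimplification of my own, not a claim about how UG actually works), the pro-drop parameter can be pictured as a single switch that licenses or blocks subjectless finite clauses:

# A toy sketch: one boolean "parameter" per language; real parameter setting is far richer.
def is_grammatical(has_subject: bool, pro_drop: bool) -> bool:
    """A subjectless sentence is licensed only if the language allows pro-drop."""
    return has_subject or pro_drop

italian = {"pro_drop": True}    # "Parla" is fine on its own
german  = {"pro_drop": False}   # "Spricht" needs "er"

print(is_grammatical(has_subject=False, **italian))  # True
print(is_grammatical(has_subject=False, **german))   # False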

Universal Grammar also provides a succinct explanation for much of the phenomenon of language transfer. Spanish learners of English who make the mistake "Is raining" instead of "It is raining" have not yet set their pro-drop parameters correctly and are still using the same setting as in Spanish.

The main shortcoming of Universal Grammar in describing second-language acquisition is that it does not deal at all with the psychological processes involved with learning a language. UG scholarship is only concerned with whether parameters are set or not, not with how they are set.

Input hypothesis

Learners' most direct source of information about the target language is the target language itself. When they come into direct contact with the target language, this is referred to as "input." When learners process that language in a way that can contribute to learning, this is referred to as "intake."

Generally speaking, the amount of input learners take in is one of the most important factors affecting their learning. However, it must be at a level that is comprehensible to them. In his Monitor Theory, Krashen advanced the concept that language input should be at the "i+1" level, just beyond what the learner can fully understand; this input is comprehensible, but contains structures that are not yet fully understood. This has been criticized on the basis that there is no clear definition of i+1, and that factors other than structural difficulty (such as interest or presentation) can affect whether input is actually turned into intake. The concept has been quantified, however, in vocabulary acquisition research; Nation reviews various studies which indicate that about 98% of the words in running text should be previously known in order for extensive reading to be effective.[6]
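
The coverage idea reviewed by Nation can be sketched as a simple calculation (the word list and text below are invented stand-ins, not data from any of the studies he reviews): what proportion of the running words in a text does the learner already know?

# A minimal sketch of lexical coverage; the studies Nation reviews suggest roughly
# 98% coverage is needed for effective extensive reading.
def coverage(text_tokens, known_words):
    tokens = [t.lower() for t in text_tokens if t.isalpha()]
    known = sum(1 for t in tokens if t in known_words)
    return known / len(tokens) if tokens else 0.0

known = {"the", "cat", "sat", "on", "a", "mat", "and", "looked", "at", "dog"}
text = "The cat sat on the mat and looked at the enormous dog".split()
print(f"coverage: {coverage(text, known):.0%}")   # about 92% for this toy text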

In his Input Hypothesis, Krashen proposes that language acquisition takes place only when learners receive input just beyond their current level of L2 competence. He termed this level of input “i+1.” However, in contrast to emergentist and connectionist theories, he follows the innate approach by applying Chomsky’s Government and Binding theory and concept of Universal Grammar (UG) to second-language acquisition. He does so by proposing a Language Acquisition Device that uses L2 input to define the parameters of the L2, within the constraints of UG, and to increase the L2 proficiency of the learner. In addition, Krashen’s (1982) Affective Filter Hypothesis holds that the acquisition of a second language is halted if the learner has a high degree of anxiety when receiving input. According to this concept, a part of the mind filters out L2 input and prevents uptake by the learner, if the learner feels that the process of SLA is threatening. As mentioned earlier, since input is essential in Krashen’s model, this filtering action prevents acquisition from progressing.

A great deal of research has taken place on input enhancement, the ways in which input may be altered so as to direct learners' attention to linguistically important areas. Input enhancement might include bold-faced vocabulary words or marginal glosses in a reading text. Research here is closely linked to research on pedagogical effects, and comparably diverse.

Monitor model

Other concepts have also been influential in the speculation about the processes of building internal systems of second-language information. Some thinkers hold that language processing handles distinct types of knowledge. For instance, one component of the Monitor Model, propounded by Krashen, posits a distinction between “acquisition” and “learning.”[7] According to Krashen, L2 acquisition is a subconscious process of incidentally “picking up” a language, as children do when becoming proficient in their first languages. Language learning, on the other hand, is studying, consciously and intentionally, the features of a language, as is common in traditional classrooms. Krashen sees these two processes as fundamentally different, with little or no interface between them. In common with connectionism, Krashen sees input as essential to language acquisition.[7]

Further, Bialystok and Smith make another distinction in explaining how learners build and use L2 and interlanguage knowledge structures.[8] They argue that the concept of interlanguage should include a distinction between two specific kinds of language processing ability. On one hand is learners’ knowledge of L2 grammatical structure and ability to analyze the target language objectively using that knowledge, which they term “representation,” and, on the other hand is the ability to use their L2 linguistic knowledge, under time constraints, to accurately comprehend input and produce output in the L2, which they call “control.” They point out that often non-native speakers of a language have higher levels of representation than their native-speaking counterparts have, yet have a lower level of control. Finally, Bialystok has framed the acquisition of language in terms of the interaction between what she calls “analysis” and “control.”[9] Analysis is what learners do when they attempt to understand the rules of the target language. Through this process, they acquire these rules and can use them to gain greater control over their own production.

Monitoring is another important concept in some theoretical models of learner use of L2 knowledge. According to Krashen, the Monitor is a component of an L2 learner’s language processing device that uses knowledge gained from language learning to observe and regulate the learner’s own L2 production, checking for accuracy and adjusting language production when necessary.[7]

Interaction Hypothesis

Long's interaction hypothesis proposes that language acquisition is strongly facilitated by the use of the target language in interaction. Similarly to Krashen's Input Hypothesis, the Interaction Hypothesis claims that comprehensible input is important for language learning. In addition, it claims that the effectiveness of comprehensible input is greatly increased when learners have to negotiate for meaning.[10]

Interactions often result in learners receiving negative evidence.[10][11] That is, if learners say something that their interlocutors do not understand, after negotiation the interlocutors may model the correct language form. In doing this, learners can receive feedback on their production and on grammar that they have not yet mastered.[10] The process of interaction may also result in learners receiving more input from their interlocutors than they would otherwise.[11] Furthermore, if learners stop to clarify things that they do not understand, they may have more time to process the input they receive. This can lead to better understanding and possibly the acquisition of new language forms.[10] Finally, interactions may serve as a way of focusing learners' attention on a difference between their knowledge of the target language and the reality of what they are hearing; it may also focus their attention on a part of the target language of which they are not yet aware.[12]

Output hypothesis

In the 1980s, Canadian SLA researcher Merrill Swain advanced the output hypothesis, that meaningful output is as necessary to language learning as meaningful input. However, most studies have shown little if any correlation between learning and quantity of output. Today, most scholars contend that small amounts of meaningful output are important to language learning, but primarily because the experience of producing language leads to more effective processing of input.

Competition model

Some of the major cognitive theories of how learners organize language knowledge are based on analyses of how speakers of various languages analyze sentences for meaning. MacWhinney, Bates, and Kliegl found that speakers of English, German, and Italian showed varying patterns in identifying the subjects of transitive sentences containing more than one noun.[13] English speakers relied heavily on word order; German speakers used morphological agreement, the animacy status of noun referents, and stress; and speakers of Italian relied on agreement and stress. MacWhinney et al. interpreted these results as supporting the Competition Model, which states that individuals use linguistic cues to get meaning from language, rather than relying on linguistic universals.[13] According to this theory, when acquiring an L2, learners sometimes receive competing cues and must decide which cue(s) is most relevant for determining meaning.

Connectionism and second-language acquisition

These findings also relate to Connectionism. Connectionism attempts to model the cognitive language processing of the human brain, using computer architectures that make associations between elements of language, based on frequency of co-occurrence in the language input.[14] Frequency has been found to be a factor in various linguistic domains of language learning.[15] Connectionism posits that learners form mental connections between items that co-occur, using exemplars found in language input. From this input, learners extract the rules of the language through cognitive processes common to other areas of cognitive skill acquisition. Since connectionism denies both innate rules and the existence of any innate language-learning module, L2 input is of greater importance than it is in processing models based on innate approaches, since, in connectionism, input is the source of both the units and the rules of language.
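
The co-occurrence idea can be caricatured in a few lines of Python (this is my own toy illustration, not a connectionist model from the literature): "connection weights" are nothing more than counts of how often two items appear together in the input.

# A toy sketch: associations between co-occurring words strengthen with input frequency.
from collections import defaultdict
from itertools import combinations

input_sentences = ["the dog barks", "the dog sleeps", "the cat sleeps", "the dog barks loudly"]

weights = defaultdict(int)
for sentence in input_sentences:
    for a, b in combinations(sentence.split(), 2):
        weights[(a, b)] += 1          # each co-occurrence strengthens the link

print(sorted(weights.items(), key=lambda kv: -kv[1])[:5])   # strongest "connections"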

Noticing hypothesis

Attention is another characteristic that some believe to have a role in determining the success or failure of language processing. Richard Schmidt states that although explicit metalinguistic knowledge of a language is not always essential for acquisition, the learner must be aware of L2 input in order to gain from it.[16] In his “noticing hypothesis,” Schmidt posits that learners must notice the ways in which their interlanguage structures differ from target norms. This noticing of the gap allows the learner’s internal language processing to restructure the learner’s internal representation of the rules of the L2 in order to bring the learner’s production closer to the target. In this respect, Schmidt’s understanding is consistent with the ongoing process of rule formation found in emergentism and connectionism.

Processability

Some theorists and researchers have contributed to the cognitive approach to second-language acquisition by increasing understanding of the ways L2 learners restructure their interlanguage knowledge systems to be in greater conformity to L2 structures. Processability theory states that learners restructure their L2 knowledge systems in an order of which they are capable at their stage of development.[17] For instance, in order to acquire the correct morphological and syntactic forms for English questions, learners must transform declarative English sentences. They do so in a series of stages that is consistent across learners. Clahsen proposed that certain processing principles determine this order of restructuring.[18] Specifically, he stated that learners first maintain declarative word order while changing other aspects of the utterance, second move words to the beginning and end of sentences, and third move elements within main clauses before subordinate clauses.

Automaticity

Thinkers have produced several theories concerning how learners use their internal L2 knowledge structures to comprehend L2 input and produce L2 output. One idea is that learners acquire proficiency in an L2 in the same way that people acquire other complex cognitive skills. Automaticity is the performance of a skill without conscious control. It results from the gradated process of proceduralization. In the field of cognitive psychology, Anderson expounds a model of skill acquisition, according to which persons use procedures to apply their declarative knowledge about a subject in order to solve problems.[19] On repeated practice, these procedures develop into production rules that the individual can use to solve the problem, without accessing long-term declarative memory. Performance speed and accuracy improve as the learner implements these production rules. DeKeyser tested the application of this model to L2 language automaticity.[20] He found that subjects developed increasing proficiency in performing tasks related to the morphosyntax of an artificial language, Autopractan, and performed on a learning curve typical of the acquisition of non-language cognitive skills. This evidence conforms to Anderson’s general model of cognitive skill acquisition, supports the idea that declarative knowledge can be transformed into procedural knowledge, and tends to undermine the idea of Krashen[7] that knowledge gained through language “learning” cannot be used to initiate speech production.

Declarative/procedural model

Michael T. Ullman has used a declarative/procedural model to understand how language information is stored. This model is consistent with a distinction made in general cognitive science between the storage and retrieval of facts, on the one hand, and understanding of how to carry out operations, on the other. It states that declarative knowledge consists of arbitrary linguistic information, such as irregular verb forms, that is stored in the brain’s declarative memory. In contrast, knowledge about the rules of a language, such as grammatical word order, is procedural knowledge and is stored in procedural memory. Ullman reviews several psycholinguistic and neurolinguistic studies that support the declarative/procedural model.[21]

Memory and second-language acquisition

Perhaps certain psychological characteristics constrain language processing. One area of research is the role of memory. Williams conducted a study in which he found some positive correlation between verbatim memory functioning and grammar learning success for his subjects.[22] This suggests that individuals with less short-term memory capacity might have a limitation in performing cognitive processes for organization and use of linguistic knowledge.

14. Second Language acquisition theories

Like any other type of learning, language learning is not a linear process, and therefore cannot be deemed as predictable as many models of SLA have hypothesized it to be. Countless theories have been developed to explain SLA, but most such theories focus merely on the acquisition of syntactic structures and ignore other important aspects. Eight second language acquisition theories are considered to have had the greatest impact on the field: behaviourism, acculturation, the universal grammar hypothesis, the comprehension hypothesis, the interaction hypothesis, the output hypothesis, sociocultural theory and connectionism.

Behaviourism gave birth to a stimulus-response (S-R) theory which understands language as a set of structures and acquisition as a matter of habit formation. Thus to acquire a language is to acquire automatic linguistic habits. According to Johnson, “behaviourism undermined the role of mental processes and viewed learning as the ability to inductively discover patterns of rule-governed behaviour from the examples provided to the learner by his or her environment.”

Acculturation, another environment-oriented theory, was proposed by Schumann. He found that “the subject who acquired the least amount of English was the one who was the most socially and psychologically distant from the TL group.” In his view, SLA is the result of acculturation, which he defines as “the social and psychological integration of the learner with the target language (TL) group”. The acculturation model argues that learners will be successful in SLA if there are fewer social and psychological distances between them and the speakers of the second language.

Universal grammar hypothesis: as a counterpoint to the environmental perspective, Chomsky’s followers try to understand SLA in the light of his universal grammar (UG) theory, an innate human endowment. Chomsky is interested in the nature of language and sees language as a mirror of the mind. According to his theory, every human being is biologically endowed with a language faculty, the language acquisition device, which is responsible for the initial state of language development. The UG theory considers that the input from the environment is insufficient to account for language acquisition.

Comprehension hypothesis: influenced by Chomsky’s assumptions on language as an innate faculty, Krashen developed an influential proposal, with emphasis on the contrast between learning and acquisition, to explain SLA. The Comprehension hypothesis refers to subconscious acquisition and not to conscious learning. The result of providing acquirers with comprehensible input is the emergence of grammatical structure in a predictable order. A strong affective filter (e.g. high anxiety) will prevent input from reaching those parts of the brain that do language acquisition.

Interaction hypothesis: other attempts to explain SLA are the different versions of the interaction hypothesis defended by Hatch and by Long, to name but two who did not accept Krashen’s Input Hypothesis. “One learns how to do conversation, one learns how to interact verbally, and out of this interaction syntactic structures are developed.” Based on an empirical study, Long observed that in conversations between native and non-native speakers, there are more modifications in interaction than in the input provided by the native speakers. He does not reject the positive role of modified input, but claims that modifications in interactions are consistently found in successful SLA.

Sociocultural theory: based on Vygotskian thought, it claims that language learning is a socially mediated process. Mediation is a fundamental principle, and language is a cultural artifact that mediates social and psychological activities. “From a social-cultural perspective, children’s early language learning arises from processes of meaning-making in collaborative activity with other members of a given culture”.

PRAGMATICS

Pragmatics is a subfield of linguistics and semiotics that studies the ways in which context contributes to meaning. Pragmatics encompasses speech act theory, conversational implicature, talk in interaction and other approaches to language behavior in philosophy, sociology, linguistics and anthropology.[1] Unlike semantics, which examines meaning that is conventional or "coded" in a given language, pragmatics studies how the transmission of meaning depends not only on structural and linguistic knowledge (e.g., grammar, lexicon, etc.) of the speaker and listener, but also on the context of the utterance, any pre-existing knowledge about those involved, the inferred intent of the speaker, and other factors.[2] In this respect, pragmatics explains how language users are able to overcome apparent ambiguity, since meaning relies on the manner, place, time etc. of an utterance.[1]

The ability to understand another speaker's intended meaning is called pragmatic competence.[3][4][5]

The sentence "You have a green light" is ambiguous. Without knowing the context, the identity of the speaker, and his or her intent, it is difficult to infer the meaning with confidence. For example:

Similarly, the sentence "Sherlock saw the man with binoculars" could mean that Sherlock observed the man by using binoculars, or it could mean that Sherlock observed a man who was holding binoculars (syntactic ambiguity).[6] The meaning of the sentence depends on an understanding of the context and the speaker's intent. As defined in linguistics, a sentence is an abstract entity — a string of words divorced from non-linguistic context — as opposed to an utterance, which is a concrete example of a speech act in a specific context. The closer conscious subjects stick to common words, idioms, phrasings, and topics, the more easily others can surmise their meaning; the further they stray from common expressions and topics, the wider the variations in interpretations. This suggests that sentences do not have meaning intrinsically; there is not a meaning associated with a sentence or word, they can only symbolically represent an idea. The cat sat on the mat is a sentence in English. If someone were to say to someone else, "The cat sat on the mat," this is an example of an utterance. Thus, there is no such thing as a sentence, term, expression or word symbolically representing a single true meaning; it is underspecified (which cat sat on which mat?) and potentially ambiguous. The meaning of an utterance, on the other hand, is inferred based on linguistic knowledge and knowledge of the non-linguistic context of the utterance (which may or may not be sufficient to resolve ambiguity). In mathematics with Berry's paradox there arose a systematic ambiguity with the word "definable". The ambiguity with words shows that the descriptive power of any human language is limited.

Etymology

The word pragmatics derives via Latin pragmaticus from the Greek πραγματικός (pragmatikos), meaning amongst others "fit for action",[7] which comes from πρᾶγμα (pragma), "deed, act",[8] and that from πράσσω (prassō), "to pass over, to practise, to achieve".[9]

Origins

Pragmatics was a reaction to structuralist linguistics as outlined by Ferdinand de Saussure. In many cases, it expanded upon his idea that language has an analyzable structure, composed of parts that can be defined in relation to others. Pragmatics first engaged only in synchronic study, as opposed to examining the historical development of language. However, it rejected the notion that all meaning comes from signs existing purely in the abstract space of langue. Meanwhile, historical pragmatics has also come into being.

Areas of interest

Referential uses of language

When we speak of the referential uses of language we are talking about how we use signs to refer to certain items. Below is an explanation of, first, what a sign is, second, how meanings are accomplished through its usage.

A sign is the link or relationship between a signified and the signifier as defined by Saussure and Huguenin. The signified is some entity or concept in the world. The signifier represents the signified. An example would be:
Signified: the concept cat
Signifier: the word "cat"
The relationship between the two gives the sign meaning. This relationship can be further explained by considering what we mean by "meaning." In pragmatics, there are two different types of meaning to consider: semantico-referential meaning and indexical meaning. Semantico-referential meaning refers to the aspect of meaning that describes events in the world independently of the circumstances in which they are uttered. An example would be propositions such as:

"Santa Claus eats cookies."

In this case, the proposition is describing that Santa Claus eats cookies. The meaning of this proposition does not rely on whether or not Santa Claus is eating cookies at the time of its utterance. Santa Claus could be eating cookies at any time and the meaning of the proposition would remain the same. The meaning is simply describing something that is the case in the world. In contrast, the proposition, "Santa Claus is eating a cookie right now," describes events that are happening at the time the proposition is uttered.

Semantico-referential meaning is also present in meta-semantical statements such as:

Tiger: carnivorous, a mammal

If someone were to say that a tiger is a carnivorous animal in one context and a mammal in another, the definition of tiger would still be the same. The meaning of the sign tiger is describing some animal in the world, which does not change in either circumstance.

Indexical meaning, on the other hand, is dependent on the context of the utterance and has rules of use. By rules of use, it is meant that indexicals can tell you when they are used, but not what they actually mean.

Example: "I"

Whom "I" refers to depends on the context and the person uttering it.

As mentioned, these meanings are brought about through the relationship between the signified and the signifier. One way to define the relationship is by placing signs in two categories: referential indexical signs, also called "shifters," and pure indexical signs.

Referential indexical signs are signs where the meaning shifts depending on the context, hence the nickname "shifters." 'I' would be considered a referential indexical sign. The referential aspect of its meaning would be '1st person singular' while the indexical aspect would be the person who is speaking (refer above for definitions of semantico-referential and indexical meaning). Another example would be:

"This"
Referential: singular count
Indexical: Close by

A pure indexical sign does not contribute to the meaning of the propositions at all. It is an example of a "non-referential use of language."

A second way to define the signified and signifier relationship is C.S. Peirce's Peircean Trichotomy. The components of the trichotomy are the following:

1. Icon: the signified resembles the signifier (signified: a dog's barking noise, signifier: bow-wow)
2. Index: the signified and signifier are linked by proximity or the signifier has meaning only because it is pointing to the signified
3. Symbol: the signified and signifier are arbitrarily linked (signified: a cat, signifier: the word cat)

These relationships allow us to use signs to convey what we want to say. If two people were in a room and one of them wanted to refer to a characteristic of a chair in the room he would say "this chair has four legs" instead of "a chair has four legs." The former relies on context (indexical and referential meaning) by referring to a chair specifically in the room at that moment while the latter is independent of the context (semantico-referential meaning), meaning the concept chair.

Non-referential uses of language

Silverstein's "pure" indexes[edit]

Michael Silverstein has argued that "nonreferential" or "pure" indices do not contribute to an utterance's referential meaning but instead "signal some particular value of one or more contextual variables."[10] Although nonreferential indexes are devoid of semantico-referential meaning, they do encode "pragmatic" meaning.

The sorts of contexts that such indexes can mark are varied. Examples include markers of deference or politeness toward the addressee (such as the choice between French tu and vous) and markers of the speaker's social identity, for instance gender or social class.

In all of these cases, the semantico-referential meaning of the utterances is unchanged from that of the other possible (but often impermissible) forms, but the pragmatic meaning is vastly different.

The performative

J.L. Austin introduced the concept of the performative, contrasted in his writing with "constative" (i.e. descriptive) utterances. According to Austin's original formulation, a performative is a type of utterance characterized by two distinctive features: it is not truth-evaluable (that is, it is neither true nor false), and its uttering performs an action rather than merely describing one.

However, a performative utterance must also conform to a set of felicity conditions. Typical examples are "I promise to pay you back" and "I now pronounce you husband and wife": uttering the words is itself the act of promising or of marrying.

Jakobson's six functions of language

Roman Jakobson, expanding on the work of Karl Bühler, described six "constitutive factors" of a speech event, each of which represents the privileging of a corresponding function, and only one of which is the referential (which corresponds to the context of the speech event). The six constitutive factors and their corresponding functions are diagrammed below.

The six constitutive factors of a speech event and their corresponding functions:

Context – Referential
Message – Poetic
Addresser – Emotive
Addressee – Conative
Contact – Phatic
Code – Metalingual

Related fields

There is considerable overlap between pragmatics and sociolinguistics, since both share an interest in linguistic meaning as determined by usage in a speech community. However, sociolinguists tend to be more interested in variations in language within such communities.

Pragmatics helps anthropologists relate elements of language to broader social phenomena; it thus pervades the field of linguistic anthropology. Because pragmatics describes generally the forces in play for a given utterance, it includes the study of power, gender, race, identity, and their interactions with individual speech acts. For example, the study of code switching directly relates to pragmatics, since a switch in code effects a shift in pragmatic force.[12]

According to Charles W. Morris, pragmatics tries to understand the relationship between signs and their users, while semantics tends to focus on the actual objects or ideas to which a word refers, and syntax (or "syntactics") examines relationships among signs or symbols. Semantics is the literal meaning of an idea whereas pragmatics is the implied meaning of the given idea.

Speech Act Theory, pioneered by J.L. Austin and further developed by John Searle, centers around the idea of the performative, a type of utterance that performs the very action it describes. Speech Act Theory's examination of Illocutionary Acts has many of the same goals as pragmatics, as outlined above.

Formalization

There has been a great amount of discussion on the boundary between semantics and pragmatics,[13] and there are many different formalizations of aspects of pragmatics linked to context dependence. Particularly interesting cases are the discussions on the semantics of indexicals and the problem of referential descriptions, a topic developed after the theories of Keith Donnellan.[14] A proper logical theory of formal pragmatics has been developed by Carlo Dalla Pozza, according to which it is possible to connect classical semantics (treating propositional contents as true or false) and intuitionistic semantics (dealing with illocutionary forces). The presentation of a formal treatment of pragmatics appears to be a development of the Fregean idea of the assertion sign as a formal sign of the act of assertion.

In literary theory

Pragmatics (more specifically, Speech Act Theory's notion of the performative) underpins Judith Butler's theory of gender performativity. In Gender Trouble, she claims that gender and sex are not natural categories, but socially constructed roles produced by "reiterative acting."

In Excitable Speech she extends her theory of performativity to hate speech and censorship, arguing that censorship necessarily strengthens any discourse it tries to suppress and therefore, since the state has sole power to define hate speech legally, it is the state that makes hate speech performative.

Jacques Derrida remarked that some work done under Pragmatics aligned well with the program he outlined in his book Of Grammatology.

Émile Benveniste argued that the pronouns "I" and "you" are fundamentally distinct from other pronouns because of their role in creating the subject.

Gilles Deleuze and Félix Guattari discuss linguistic pragmatics in the fourth chapter of A Thousand Plateaus ("November 20, 1923 – Postulates of Linguistics"). They draw three conclusions from Austin: (1) A performative utterance does not communicate information about an act second-hand; it is the act. (2) Every aspect of language ("semantics, syntactics, or even phonematics") functionally interacts with pragmatics. (3) There is no distinction between language and speech. This last conclusion attempts to refute Saussure's division between langue and parole and Chomsky's distinction between surface structure and deep structure simultaneously. [15]

