Tarnish… it spreads


An article published on the BBC website yesterday, Derby teacher banned over coursework plagiarism, reports an unusual situation, one which raises many questions.

Gavin Bevis reports on a teacher who has been banned from teaching for at least 7 years. It seems that in 2019 the teacher had inserted passages into two students’ coursework after they had submitted their work to the teacher and before she submitted it for assessment.   The passages had been taken from a student’s work submitted the year before, 2018, with just a few changes made.   The work was flagged as suspicious and investigation confirmed plagiarism.

Bevis’s report does not say what triggered the flagging, whether it was flagged by text-matching software such as Turnitin or a similar tool, or simply that the examiner/s reading the work felt that something in the writing did not ring true; perhaps it just rang bells (or perhaps it was a different cause of concern altogether). Whatever the trigger, an investigation took place. The OCR examination board followed up with the school, and the school followed up with the students.

The students claimed that they were not responsible for the changes, one of them saying that they had not completed that section of the exam and the other saying they had written about a different case-study.  So the school investigated more deeply and discovered that the teacher had printed out the earlier student’s work shortly before sending off the 2019 students’ coursework.

It is not often that we hear about teacher misconduct so we do not know how widespread it is – but then, although we often hear about student academic misconduct, we do not know how widespread it is. We hear only about the ones who get caught, some of the ones who get caught.

As so often, plagiarism specialist Jonathan Bailey was quick to spot and comment on the report. Under the headline UK Teacher Banned After Plagiarizing on Behalf of Students on his Plagiarism Today site, Bailey makes many pertinent points, raises many serious questions.

He suggests, for instance, that the teacher’s action could have damaged the two students’ reputations and prospects had the school not persevered and discovered evidence incriminating the teacher.  I wonder how many other schools would have been so determined, investigating further when students claim innocence – after all, maintaining that they are innocent is exactly what guilty students often do.

Bailey raises two big questions. One is why it took the Teaching Regulation Agency so long to come to its decision; although the teacher was suspended early in this saga, the lack of conclusion would have been a concern for the school, the students and the teacher herself.

Bailey’s second question is whether the school has investigated the teacher’s earlier history; she had been at the school for 12 years and it is legitimate to ask, had she done this before, how many times, over how many years?

I would ask further questions: why these two students? were there other students in the 2019 cohort whom she had “helped”?  were and are there other teachers in the school who “help” their students, in these or in other unethical ways – is this part of the school culture? can we trust the school’s examination record? can we trust the validity of the coursework scores of students taught by this teacher, in this or in previous schools? Suspicion may not be warranted or deserved but it spreads.

As Bailey suggests:

To prevent future cases of academic integrity from slipping through the cracks, you can’t just study the cases you catch, you have to look at the ones you miss.

If you do not mind the leap, AI detectors are very much in the news just recently, some apparently better at discerning GenAI content than others and none perfect.  The concern seems true of them as well (and also of text-matching software, the so-called plagiarism detectors).  It is not just the ones they catch that demonstrate efficiency and reliability, it is the ones they miss, and there is no way to estimate those. But that is another story, for another day.

Author, author! Author…?


My last article, To be verified…, centred on an item in The Times which claimed that the International Baccalaureate (IB) was or would be allowing students to use ChatGPT and other forms of artificial intelligence (AI) in essays and other work, as long as the use of such tools was acknowledged and attributed appropriately.

The news turned out to be true; an article by Matt Glanville, Head of Assessment Principles and Practice at the IB, was published on the same day, and may well have been the source of The Times’ reporter’s story.  Titled Artificial intelligence in IB assessment and education: a crisis or an opportunity?, it provides deeper and more thoughtful detail and consideration than the story in The Times, including a rationale for the decision to allow its use and thoughts on how it might change learning and teaching and the purpose of assessment.  For those with access to My IB, Appendix 6 of IB’s newly updated Academic Integrity Policy provides much for educationists to think about; it requires that use of AI be acknowledged and, when used in assessments and coursework, cited and referenced.

Elsewhere, on LinkedIn, IB has published a slide set Guidance for students on referencing AI, with the first slide reading “How IB students can correctly (sic) reference AI tools like ChatGPT”.  (I am not so sure that “correctly” is the right word, mainly because it implies that there is just one “correct” way to reference AI tools regardless of which style guide is being used for the rest of the work.  I am not sure about the helpfulness of the examples used in the slides either, but that is very much another matter.)

Not everyone in education agrees, whether on the question of using AI at all or on the efficacy of AI-generated text detection software.  The i newspaper reports Oxford and Cambridge ban ChatGPT over plagiarism fears but other universities embrace AI bot, while The Guardian declares Australian universities split on using new tool to detect AI plagiarism.

Publishers have different takes on AI as well, raising some interesting and paradoxical considerations:  if we require tools such as ChatGPT to be cited and referenced, does this give them some form of authority?  Authority implies responsibility and, dare I say it, authorship – but can ChatGPT be an author? If it can be regarded as an author, can it then be a co-author if it has significantly contributed to a study and/or its resulting article or paper?   

The first sentence of that previous paragraph may need revision.  Instead of starting “Publishers have different takes on AI as well” it might be more accurate to say “Publishers had different takes on AI as well”.  When ChatGPT first became widely known, some publishers seemed very ready to accept ChatGPT as author or co-author; in January 2023, the journal Nature carried a News article ChatGPT listed as author on research papers: many scientists disapprove: At least four articles credit the AI tool as a co-author, as publishers scramble to regulate its use.

One of those articles may well have been the paper published in Nurse Education in Practice, part of the Elsevier stable, in January 2023: Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?, which originally listed two authors, Siobhan O’Connor and ChatGPT.

Elsevier has had second thoughts. In February 2023 a Corrigendum was made and, although it does not say what was corrected, the paper now shows Siobhan O’Connor as sole author.

[As an aside but for further consideration, I am concerned that there is no obvious indication on the original article showing that a correction had been made; clicking on the Show more indicator reveals a message and an invitation to check for updates, an Erratum message and a link to the Corrigendum, but I do wonder why the correction is not more obvious.]

Academic publishers may be clearer now in their views on the inclusion of AI tools as authors. 

As an instance, journals in the Science stable (published by the American Association for the Advancement of Science – AAAS) do not accept AI as author or co-author.  Holden Thorp, editor of Science, stated in ChatGPT is fun, but not an author that AAAS’s Editorial Policies require authors to have agency and to take responsibility for their contributions; since artificial intelligence lacks agency and cannot be held responsible for its output, it cannot be cited as an author.

In a position statement on Authorship and AI tools, the UK Committee on Publication Ethics (COPE) also advises that AI cannot take responsibility for its output and therefore cannot be named as an author or co-author of a paper; its use must instead be acknowledged in the Methods or other appropriate section of a paper.

The lines “A paradox / A most ingenious paradox!” (as Ruth sings in The Pirates of Penzance) come to mind. Scholarly publishers demand that we do not cite and reference AI tools, the IB (and probably other educational bodies which allow use of AI as well) requires citations and references.

So we come to the question, what do the major referencing style guides say?

Perhaps not surprisingly, the major style guides also give different advice.  APA, for instance, gives advice on how to cite and reference AI when this is required by instructors, while recognising that many instructors either forbid its use or strongly urge caution on those who do use it (How to cite ChatGPT).  I think APA’s original advice was to treat ChatGPT output as a personal communication, cited in the text but not included in the reference list as it is a non-retrievable source – but the advice in this blog article is different; I wonder if I am thinking of advice given in libguides and by other gurus, based on how they thought APA might handle ChatGPT output. 

APA’s current advice, as stated in How to cite ChatGPT,  is to reference it as an algorithm:

Quoting ChatGPT’s text from a chat session is therefore more like sharing an algorithm’s output; thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation.

The example given is

When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).

Reference

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

The article also notes that the APA Style team is in discussion with the editors of the journals published by APA and will issue (more) definitive guidance later this year.

APA’s stance contrasts with that of the Chicago Manual of Style, which prefers the notion of a personal communication.  The CMOS page Citation, Documentation of Sources gives examples of both footnote and in-text citations of AI source material, but advises

But don’t cite ChatGPT in a bibliography or reference list. Though OpenAI assigns unique URLs to conversations generated from your prompts, those can’t be used by others to access the same content (they require your login credentials), making a ChatGPT conversation like an email, phone, or text conversation—or any other type of personal communication (see CMOS 14.214 and 15.53).

MLA takes the stance that AI cannot be treated as an author so a user of AI should treat its output as authorless, with the prompt (or a short-form of the prompt) used in the citation in the text and the full prompt included in the list of Works Cited (How do I cite generative AI in MLA style?).

Works-Cited-List Entry

“In 200 words, describe the symbolism of the green light in The Great Gatsby” follow-up prompt to list sources. ChatGPT, 13 Feb. version, OpenAI, 9 Mar. 2023, chat.openai.com/chat.

APA and MLA (but not Chicago) caution that writers should verify whatever information they are given by an AI tool, whether AI gives a citation or not.  This is good practice – should be standard practice – whether it is AI or an online source or a print source being used. It is especially so while ChatGPT (and possibly other AI tools) is so notoriously given to hallucination, sometimes “making up” the information it gives, sometimes inventing its sources of information – and sometimes giving very accurate information without citing its sources.  “Go to the source – and then cite that” has always been good advice.

Clearly (and despite that IB slide set) there is more than one “correct” way to acknowledge, cite and/or reference use of artificial intelligence tools.  Best advice might be to use any examples in the published style guides as templates for whatever AI is being used and, for IB assessment, to include a bibliographic reference even when the style in use suggests that a reference is unnecessary.

Panic!

And still there is panic in educational circles.  Part of the concern is due to fears of plagiarism – it is all very well requiring students to document their use of AI tools, but what of students who use AI to produce their work in part or full but who do not declare it at all?

In almost an echo of the “how much plagiarism is acceptable?” non-question commonly asked in educational forums, now that Turnitin is flagging AI-produced content some teachers are asking “how much AI-produced content is acceptable?” and “how much AI-produced content is acceptable if it is cited and referenced?”!  (These are the gist of two questions raised in a post, Clarity on the IB Guidelines on the use of AI Tools, on My IB Programme Communities – so accessible only to those who can access My IB, I am afraid.)

Leaving aside the issue of how accurate AI-detectors are, and reports of both false positives (material flagged as AI-generated when it is genuinely the work of the writer) and false negatives (AI-generated material which passes as genuine), there is the issue (already mentioned here) that AI tools do not always report where they obtained the information they output and, when they do, this may not be true or accurate.  This raises the question: if a student uses and cites ChatGPT when the software has plagiarised or invented its information and/or its sources, is the student plagiarising or otherwise misleading the reader too?  Can the student be accused of plagiarising if they have cited their source, or given a secondary citation for the source which ChatGPT claims to have used?

My own thought is probably not – not if the writer has cited the source, either citing ChatGPT directly or with a “ChatGPT cites [named source] as saying ‘bla bla bla…’”.  The student is being honest about the source of the information – but that student may well be guilty of a lack of academic integrity by not digging deeper, not checking the veracity and accuracy of what has been garnered from the AI.  And again, this goes for use of any material, be it AI or online or print or broadcast or whatever – the integrity of the research is at risk if we do not check and verify.

There is a lack of honesty – and of integrity too – if there is no attempt to cite AI as the source of information, just as there is if the source is print or digital or online, and just as there is when writers reuse their own earlier work without saying so: self-plagiarism.  Plagiarism (and self-plagiarism) is two-sided. Not only do writers (or AI tools etc) whose work is used without attribution miss out on the credit which is their due, but those who read the plagiarised material lose out too – they are deceived into thinking that the current writer is responsible for the words, thoughts and information, and so give that writer more credit than they deserve.

Jonathan Bailey’s blog post One Way AI Has Changed Plagiarism takes this line of thought further. Commenting on the criticism that CNET received when it revealed earlier this year that articles which it had published as written by “CNET Money Staff” were in fact AI-generated content, he suggests

The audience felt lied to, and for good reason. The fact no person was plagiarized from was unimportant, it was the lie (or the omission) that was the issue.

This cuts more to the fundamental issue of what plagiarism is. It is a lie. It is an author saying, either directly or implicitly, that the work is theirs and is original when, in fact, it is not. 

This puts the focus on what the actual act of plagiarism is. It’s not a sneaky attempt to deprive attribution, but an attempt to lie and pass off the work to others. 

With no direct victim, willing or not, the conversation can finally focus on that.

I think Bailey has captured and extended what I have tried to say in several of my own posts, most recently in Back to basics, again, where I quote Heather Michael saying, in an IB video International-mindedness and the DP Core (also available on Vimeo)

I worry sometimes that people task the extended essay and sort of deliver it as a series of timelines as opposed to teaching students what it means to be a researcher (00.40).

As educators, we really should be concerned with process as well as product, helping students understand what it means to be a researcher – and thus why integrity is so important and is not just a matter of citing and referencing sources.  And of course, many of us are so concerned, including the readers of this blog.

Being a researcher requires accuracy, transparency, thoughtfulness, honesty, integrity and more.  Being an authentic researcher means going to the source, checking and verifying, weighing and evaluating – something which gets us beyond pondering questions of authorship and on to considering the author themselves.

It sounds like hard work and maybe it is – but research is rewarding, research is fun and the result should be something genuine, helpful, something of which to be proud.

To be verified…


Half-listening to the news on BBC Radio 4 this morning, I was jerked to full attention during the regular quick look at the front pages of today’s UK newspapers. The Times has a front-page report declaring that the International Baccalaureate (IB) is allowing students to use artificial intelligence to help them write their essays as long as they credit the AI used.

News to me!

Quick checks: the BBC News website includes front-page views of today’s newspapers (for a limited time only, possibly for copyright reasons). I took a screengrab.

The headline reads: Exams body lets pupils use AI chatbot to write essays.

The Times website carries the story as well – unfortunately behind a paywall, and The Times is not a newspaper to which I subscribe.

A quick Google check for [artificial intelligence international baccalaureate] – using the News feature and limiting the search to the last week – found just one mention of the story –

Google search [artificial intelligence international baccalaureate]

the story in today’s The Times. There are several stories of students being punished for using artificial intelligence, even in IB schools.

Checks on the open IB website and in the closed-access My IB find no mention of this. It looks as if The Times has a world exclusive! (The thought that the newspaper had fallen victim to a hoax crossed my mind.)

Having bought a print copy of the newspaper, I wonder about the accuracy of the headline Exams body lets pupils use AI chatbot to write essays – that “lets” may be a trifle misleading. It implies that the IB already allows students to use AI in their work for assessment. The second paragraph states

Continue reading

Credit where it’s due


I cannot give credit to whoever coined the phrase “credit where it’s due”; I fear that is lost in the mists of time.

It is a common term in education and academia, but it was – and probably still is – more everyday than that, used (often) to divert praise away from oneself and on to someone more deserving, the person who wrote, made or did whatever it was.

We often use the term in education, one of the reasons for citing one’s sources (at point of use in text), but I am not sure that students are always aware enough of what academic writing is all about to fully appreciate how helpful it can be.

This notion was brought home to me in a recent online workshop. Asked to design a poster or a slide sequence, several participants produced “citations” on the slides which were simply the URLs of the web pages (and occasionally the sites, but not the exact page) of the source of image or text they had used; references listed on the last slide or two also comprised URLs only.

Continue reading

Back to basics, again


News that ChatGPT had “sprinted” to one million users in just five days, exponentially faster than any other online service, has itself spread fast. The chart produced by Statista has been reproduced many, many times; it is big news.

Articles about ChatGPT and AI generally seem to be increasing almost as fast, and my last post here, Here we are again!, just added to the number.  News that Google is about to launch its own chatbot, Bard, keeps the story very much alive. Those commenting on developments in the AI field must feel that it is sometimes hard to keep up.

Meanwhile, many in education and other fields fear that ChatGPT will make plagiarism and other forms of non-authentic work easier.  On the other hand, there are many, even in education, who see great potential in ChatGPT, see ways it can make their work easier. Some hold that it could lead to improved work and enhance critical thinking and student creativity.  At the same time, Courtney Cullen, in a post on the International Centre for Academic Integrity (ICAI) site, Artificial Intelligence: Friend, Foe, or Neither?, strikes a balance; she welcomes “the increased focus on academic integrity” in educational circles.  We want our students to learn and show that they are learning, not simply to parrot, possibly unread, something generated by a machine.

Continue reading

Here we are again!


Since ChatGPT was first launched towards the end of 2022, there has been much alarm expressed in schools and colleges, in discussion forums, blogs and other social media platforms, in the educational press and in the general press too. There has also been calmer discussion; we shall come to that.

ChatGPT is an artificial intelligence (AI) text-generator, developed by OpenAI.  Its appearance marks a huge step forward in the evolution of AI.  Until now, text-based AI has been uninspiring and flawed: think of the chatbots used by many support centres Continue reading

Who’s your friend?


One of the consequences of the death of Queen Elizabeth II last month is that over 800 individuals and companies who at the time of her death held a Royal Warrant for providing goods or services to senior members of the Royal Household need to re-apply for the warrant.  Many may lose their warrant if King Charles III (and any other member of the royal family whom he appoints as a grantor) does not share the Queen’s tastes or needs. In addition, the warrant is not granted for the lifetime of the royal who grants the honour; every warrant holder needs to re-apply every five years to ensure that the Royal Household still uses the product or service.

When a royal warrant is cancelled or expires, the ex-warrant holder must remove the royal insignia from their labels, letter-heads and anywhere else they display the arms and the message “By appointment to Her Majesty the Queen” or “By appointment to His Royal Highness the Prince of Wales” – declarations which must now be updated.  (For more information on this, see the Royal Warrants page of the Royal Family website or the FAQs page on the Royal Warrant Holders Association website.)

The Royal Warrant is, of course, highly prized and not easily obtained. Continue reading

Vanity, but not in vain


It has been a little while (okay, a long while) since I last posted here.  I am far from the only person who has had a difficult last few years, of course, but still.  I hope my personal situation is easing now and that I can fully get back into the swing of things.

I did start several blog posts during my long “sabbatical” and I may get round to completing them if they still seem relevant. What has sparked my interest now is, in a way, very personal, and conceited fool that I am, I cannot resist sharing.

Many readers of this blog have accounts with platforms for sharing academic research and articles such as Academia.edu and ResearchGate, to access academic papers, contribute informally to the body of knowledge Continue reading

Another brick in the wall


I have come across an interesting twist in the contract cheating industry, Ghost Grading: Part 1 – A New Twist on Contract Cheating.  I hope I do not steal any of Dr Sarah Elaine Eaton’s thunder, especially as she still has Part 2 of her investigations to come, but the story is of interest.

It seems that teaching assistants and other instructors in North America (maybe elsewhere too?) are being targeted to outsource their grading duties.  The contract grading company gets paid by the TA at a rate lower than the TA receives from their institution, so the TA has money-in-hand without doing the work and also, as Eaton puts it, Continue reading

Takes your breath away…


News reports two days ago indicated that cigarette manufacturer Philip Morris’s takeover bid for Vectura, a UK manufacturer of lung health products, looks set to go through.  This  is not a matter of academic integrity and I am not sure about the integrity issues pure and simple either – but there are surely ethical considerations to ponder, and ponder I do.

Last month, discussing What’s not there, I wrote about e-smoking manufacturer Juul’s purchase of the May-June edition of the American Journal of Health Behavior (AJHB); the Special Open Access Issue on JUUL comprised eleven research studies and two editorial articles on JUUL, all attempting to provide Continue reading

What’s not there


In How to make the world add up: Ten rules for thinking differently about numbers,* economist Tim Harford’s Rule Six reads

Ask who is missing.

It is sound advice.  Too often, we are so busy thinking about what IS there that we forget to look for what IS NOT there.  Looking at studies and surveys and pondering their conclusions and implications, it is important to know who and what were surveyed and studied, where and when and how the investigations were carried out.  With surveys, we need to know the demographics of the sample investigated, since factors such as age, gender, place, ethnicity, religion, class or wealth, job or employment and many other factors including the size of the sample and how participants were chosen could affect conclusions about whatever is being studied, including considerations of whether those conclusions might – or might not – apply to those who were not studied, did not take part in the study.  Unless the sample includes everyone in the population, we cannot (at least, we should not) generalise and claim that whatever we have concluded applies universally.

Caroline Criado Perez makes this point over and over in her book, Invisible Women: Exposing Data Bias in a World Designed for Men.  I think this paragraph Continue reading


Spinning it out


A few weeks ago, my eye was caught by an article in The Guardian, Overconfident of spotting fake news? If so, you may be more likely to fall victim.  Natalie Grover reported on a recent survey of 8285 Americans which suggests that 90% of participants thought that their ability to distinguish between fake and accurate headlines was above average, that those who had over-high perception of their abilities were more likely to visit websites which tended to publish false or inaccurate news items, and they were also more likely to share fake news; on the other hand, those who took a more thoughtful approach to their news reading were less likely to be misled by or to share inaccurate and false news reports.

There have been many studies of over-confidence in recent years.   It may be this misplaced self-confidence which leads students (and people generally) to go for and to use without question whatever comes up as Google hit number 1, this regardless of anything they have been told and taught about website evaluation.  It may be a form of cognitive dissonance – knowing that they have to slow down and think about what they find online while at the same time accepting what they find online without thinking about it.

Who needs those CRAAP and WISER and CARRDS or other evaluation tools?  Why bother to laterally read and think, or use Four Moves?   We do not need to think, we cannot be taken in, we know best.

Think again.

I am not sure about my own general news-reading habits, but I do know I tend to be Continue reading

Feeling the draft


News reporters who plagiarize their stories occasionally make the news themselves – when they are found out.  I was alerted to just such a story a few days ago. My alert service pointed to two short online reports and I had a look.  There were a couple of statements in those reports which puzzled me; they were so intriguing that they got me looking for more details and for clarification.

I am not sure that I found clarification.  I did find more reports on the same story, some published a day or two later but quite a few published much earlier. The core of the story remained the same but each succeeding report I looked at seemed to add a different detail.  Unhelpfully, some of those extra details did not quite match the details of other reports.

And while I do not want to comment on the case itself, not least because there is an active legal case going on (the reporter is suing for unfair dismissal), I think there are general points which can be made and general questions to ask which are of interest with regard to honesty and integrity in education and academia.

Let’s dive in!

There is agreement on the basic situation Continue reading

Reader beware – different views of point


Do you use Reader View?  Do you recommend it to your students?  I often use Reader View when available, especially if I want to print out or save a PDF version of the page I am looking at and there is no ready-made PDF version already linked on the page.

Reader and Reader View are extensions or apps which enable “clean” views of the page you are looking at, keeping the textual matter but avoiding the advertisements, embedded videos, navigation and sidebar matter and other distractions.

Here, for instance, is a page on MacWorld, How to enable Reader View automatically for websites in mobile and desktop Safari:

The advertisements flicker and change, the video clip plays automatically and floats so that it is always on the screen, there are several more distractions as you scroll through the article.

These distractions disappear Continue reading

Nothing but …


Last week, I received an email message from Chegg, telling me they had recently changed their Terms of Service.  It was very much an in-your-face message, in Helvetica 21.  That is big.

The body of the message reads:

 

 

We have updated our Terms of Use.

The updates are effective as of March 17, 2021. They apply to all accountholders, so we encourage you to read the Terms of Use fully. Some of the updates include changes to the Dispute Resolution section, the Arbitration Agreement, and to the procedures for filing a dispute against Chegg. The Terms of Use can be found here.

If you do not wish to have these Dispute Resolution updates apply to you, you must notify us as described in the Terms of Use within 30 days of their effective date.

 

 

 

 

It is a very carefully worded message. We are urged to “read the Terms of Use fully” and are told that “some of the updates include changes to” three specific areas of the Terms of Use, all three dealing with problems arising from using Chegg services and procedures in case of  dispute.   Note that use of “some of the updates include changes to…” – note that “some.”  The implication is that there may be other updates, other changes, but they are not mentioned in the email.

Nor are they listed on the Terms of Use page. There is no summary of changes made, no indication of what the previous terms were for comparison purposes.  Nor is there any indication of what, outside the dispute procedures, has also changed – just that note in the email suggesting that there have been changes elsewhere in the Terms of Use.  It is for the user to find them, “we encourage you to read the Terms of Use fully.”

There are 47 topics in the Terms of Use, more than 14,000 words on the page – Continue reading

Tempting snakes


It is some time since I last wrote about Viper, a free service which called itself a “plagiarism checker,”  housed on a site called ScanMyEssay.  It is worth writing again, because there are a number of changes in Viper’s  services and in the Viper business model.

In those earlier posts, Authentic Authenticity (published September 2013)  and Snake (in the grass) (March 2016), I advised against Viper because among other things: Continue reading

MLA9 already – and already mixed feelings


It does not seem long since the Modern Language Association published its 8th edition (MLA8) – but I see that it was released as much as 5 years ago, in April 2016. Now, next month sees publication of MLA9, the 9th edition of the MLA Handbook – and yesterday MLA hosted a webinar preview of the new edition.

I well remember my excitement and delight, as that edition seemed revolutionary (as I wrote in MLA8 – new edition of MLA Handbook and Back to basics – MLA8 revisited).  Instead of presenting lots of rules and variations from and exceptions to the rules in an attempt to include all types of known (and unknown) source, format, medium, platform and more, we were given a template to follow with which we could build the references which informed our lists of Works Cited, while still being faithful to the rationale and the principles of academic referencing and supporting our readers.  This was empowering, it was liberating.

The principles of MLA8 citation and referencing are Continue reading

The integrity of integrity


One of my neighbours was livid earlier this week. The council recycling collection team had not emptied his recycling box. We leave our recycling boxes at the roadside for collection; everyone else’s recycling had been collected, our boxes emptied, but not his.  A large tag tied to the handle explained why:  the recycling was contaminated.

Someone, presumably a passer-by, had deposited a polystyrene carton and the remains of a take-away meal in the recycling box. The whole box was deemed contaminated and could not be taken for processing.

Contamination of recycling is a problem. If not caught Continue reading

Cheap Shots


It is easy to take pot-shots at EasyBib. They make it too easy, as I have suggested many times over the years.  They have an imperfect citation generator which frequently churns out incorrectly-formatted citations (especially in auto-citation mode). They give wrong advice in their guides to citation styles. They have produced many flawed add-ons which attempt to enable “Smarter Research. Powered by You,” such as their Research and Essaycheck services (both of which were abandoned some years ago; the links here go to the Internet Archive records).  Their grammar and spelling checkers need to be used with great care – but that goes for many, probably most, possibly all grammar and spelling checkers.

[Among my various blog posts which mention EasyBib, Getting it wrong…, Not so easy does it, APA mythtakes and Not such a wise OWL are particularly pertinent here.]

As I say, EasyBib makes it easy to shoot ’em down.  I probably would not have bothered this time, except that, clearing my inboxes (long overdue), I came across an EasyBib blog post which Continue reading

Stylistically speaking


A pedant myself, I was naturally attracted to an article by Elizabeth Ribbans in the Guardian this week: the headline read COVID or Covid? The comfort of pedantry at a time of national crisis.

Ribbans is the newspaper’s readers’ editor; her team is responsible for fact-checking, correcting copy and dealing with readers’ questions, comments and complaints. The question which inspired the headline was from a medical specialist who asked why the Guardian insisted on using Covid-19 when the medical profession uses COVID-19.

Ribbans explains that it is the Guardian‘s practice, along with many if not most British newspapers,

to use uppercase for abbreviations that are written and spoken as a collection of letters, such as BBC, IMF and NHS, whereas acronyms pronounced as words go upper and lower, eg Nasa, Unicef and, now, Covid-19.

(This is, incidentally, a practice I abhor. “Nasa” and “Unicef” are not words even if their abbreviations/ acronyms can be pronounced; when I see them spelled as “NASA” and “UNICEF” I am aware of the full title of the body and its responsibilities, just as I am aware of who the BBC, IMF and NHS are and what they do. Continue reading

Avoid like the plague…


It’s an ill wind, they say, an ill wind which blows nobody any good.

Covid-19 / coronavirus is spreading, more people are affected, the global death toll keeps rising, and at exponential rates.  Businesses are closing, in some cases for good.  Parents are having to stay at home to look after children whose schools are closed. Stay indoors, do not go out unnecessarily, keep your distance, wash your hands.  The times are grim, the news is grim, we are all indirectly and directly affected (and if we aren’t affected yet, we will be).

The times are bringing out the worst in us, the times are bringing out the best in us.  While many selfishly rush to stockpile and the shops empty and more are happy to flout emergency regulations, we also see much that makes us proud: the selfless dedication of medical personnel and others in key services, new community awareness, measures of environmental recovery too.  These may be bad times but there is much that is good too, generosity and compassion.

Even cheat sites are playing their part. Well, one at least is. A special offer in the face of global catastrophe, Continue reading

None too advanced


In my last post, Guest what?, I described how I got intrigued by an article extolling the virtues of online essay writing services. It was posted on a website devoted to trashing the Royal Dutch Shell oil company. The article seemed so very out-of-place that I started investigating both the gripe site itself and the article.

Although the article, 10 Interesting Facts about Online Essay Writing Services, reads as if talking about essay writing services in general, it gives no names, no  examples. There is, however, a single hyperlink to one of these services.  It links to a site well worth looking at more closely. It might even be worth sharing and discussing with students, the better to put them off any temptation to use such sites themselves.

The underlined text links to a site called Continue reading

Guest what?


Now here’s an oddity. My plagiarism news alert alerted me to 10 Interesting Facts about Online Essay Writing Services the other day. What I found interesting, even before I clicked on the link, was that the article was posted on the Royal Dutch Shell plc .com website. What interest did Shell, the multinational/ global oil company, have in online essay writing services?

I just had to find out.

It turns out that Royal Dutch Shell plc .com is a gripe site, someone with a grudge against Royal Dutch Shell. The Shell website is simply www.shell.com, not royaldutchshellplc.com.

The site was founded by and is maintained by John Donovan.  On his disclaimer page, he openly proclaims the nature of his grudge against Shell.

Donovan might have good cause for his grievance; he certainly seems to have grievance, be it justified or not.  His site is full of whistle-blowing articles pinpointing practices which may be of a dubious nature. The origins of his grievance are highlighted on his eponymous site, johndonovan.website (one of several he maintains):

And the puzzle: in among the many many articles accusing Shell of misdemeanours of many kinds is the article,  10 Interesting Facts about Online Essay Writing Services. It seems out of place. What’s more, the “10 interesting facts” article extols the supposed virtues of a good essay writing service. Donovan appears to be very much in favour of them.  The article claims that “trustworthy and effective” services provide Continue reading

Here’s a how-de-do


In a recent post, APA7 – not so sure…, I said that one of the things I like about the latest edition of the APA Publication Manual is that it standardises the recording of a DOI – to the form https://doi.org/10.xxxxx.yyyy.  Previously there were several different ways of recording a DOI, including

doi:10.1098/rstb.2010.0321
http://dx.doi.org/10.1098/rstb.2010.0321
https://doi.org/10.1098/rstb.2010.0321

All three methods were accepted in APA style documents, with the caveat that the formats should not be mixed in any one reference list, authors should change the format of any DOIs if and as necessary to provide a consistent style in that paper.

The latest edition of APA advises a standard format, so this item would now be referenced only as https://doi.org/10.1098/rstb.2010.0321.

This standardisation is good, it reduces potential confusion.
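(If you like to tidy such things up in bulk, the conversion is mechanical enough to script. Here is a minimal sketch in Python – the helper name normalise_doi is my own invention, purely illustrative and not a tool provided by APA or Crossref – showing how the legacy forms above collapse into the single recommended form.)

```python
import re

def normalise_doi(raw: str) -> str:
    """Rewrite a DOI given in any of the legacy forms into the
    standard https://doi.org/... form recommended by APA 7."""
    doi = raw.strip()
    # Strip legacy prefixes: "doi:", "http://dx.doi.org/", "https://doi.org/"
    doi = re.sub(r"^(doi:\s*|https?://(dx\.)?doi\.org/)", "", doi, flags=re.IGNORECASE)
    return f"https://doi.org/{doi}"

# All three historical forms normalise to the same reference:
for legacy in (
    "doi:10.1098/rstb.2010.0321",
    "http://dx.doi.org/10.1098/rstb.2010.0321",
    "https://doi.org/10.1098/rstb.2010.0321",
):
    print(normalise_doi(legacy))
# Each line prints: https://doi.org/10.1098/rstb.2010.0321
```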

But it’s not just online documents which have DOIs – print documents are often assigned DOIs as well. The APA-style reference for APA’s Publication manual is (according to my paperback edition of the style guide, p. iv):

American Psychological Association (2020). Publication manual of the American Psychological Association (7th ed.). https://doi.org/10.1037/0000165-000

Which may make for complications (especially for students in IB schools).

In an earlier post, Just a matter of time, I pointed to confusion between online material and material obtained online. Students (and teachers and others) are often confused in this regard; the title of Katie Greer and Shawn McCann’s article says it all: Everything Online is a Website: Information Format Confusion in Student Citation Behaviors.

IB adds to the confusion by requiring students to provide dates of access for electronic sources.

Now APA7 adds to the pot by requiring that DOIs be provided, using the https:// format, for print materials as well as for online materials:

Include a DOI for all works that have a DOI, regardless of whether you used the online version or the print version (APA7, p. 299).

Putting it all together, I’ve got a little list – of incompatible requirements. * 

  • Many referencing style guides (including APA) advise that date of access is needed only for online materials which are unstable, their contents or the URL might change or be changed.
  • The guides advise that materials with a DOI are regarded as stable so do not need a date of access.
  • APA7 requires that if a source has a DOI then it should be included in the reference.
  • APA7 requires that the DOI use the https:// protocol, thus
    https://doi.org/10.1037/0000165-000.
  • (As noted,) materials with a DOI are regarded as stable so do not need a date of access (in major referencing guides).
  • IB requires that references for electronic sources include the date of access.
  • IB examiners have been known to comment “Date of access?” on reference lists which include DOIs which do not have dates of access – marks may have been deducted for the omission.
  • It is unlikely that IB examiners will check whether a work in a reference list which carries a DOI is available in print; the DOI will have the https:// protocol and therefore look just like an online source.
  • IB examiners might therefore deduct marks for not including the date of access of a print work because they think it is an online source and therefore should have a date of access.

It’s a fine how-de-do, isn’t it, a pretty mess AND a state of things? *

Here are two suggestions for resolving the conundrum:

1) if referencing print materials with DOI for IB assessments, advise students not to give the DOI despite any advice to the contrary in the referencing guide.

OR

2) IB should instruct examiners that if a reference includes a DOI – including entries in the form https://doi.org/10.xxxxx.yyyy – then no date of access is required; to dispel confusion in schools, this advice could (and should) be added to IB guidance such as the page Acknowledging the ideas or work of another person—minimum requirements.

 

*  I seem to have Gilbert and Sullivan’s Mikado playing earworm, both “I’ve got a little list” and “Here’s a how-de-do” feature in the comic opera – which leads to the thought, if we are trying to “make the punishment fit the crime,” we must first be sure that a crime has been committed.

No dumb questions


Some of the questions asked in forums to which I subscribe are basic and quickly answered, questions such as

  1. I’ve heard that the abstract is no longer required in Extended Essays. Is this true?
  2. Can students write an Extended Essay in their ab initio language?
  3. Should a Language B student write the RPPF in their own language or in the language of the essay?

Sometimes the writer knows that these are basic questions, prefacing the question with something like “Apologies if this is a stupid question…”

Those who do apologise should understand, there are no dumb questions. If you don’t know the answer and you need to find it, it’s a valid question.  If you have made the effort to find out but cannot find (or do not understand) the answer to your questions, then it may be that your search powers need boosting, it may be that you are looking in the wrong place/s, it could indicate a fault on the part of those who compile the guides or design the websites – but these questions are still valid and those who ask them still need answers.  Don’t apologise! (But see (4) below.)

I am very aware that, especially in the extended essay forums, supervisors may not have supervised a student under the current curriculum (which was introduced in 2016), their experience (if they have experience) was some years ago using an earlier and in some respects very different guide. There is no use saying, they should know by now; they have not had the opportunity to find out. Their questions are still valid.

[As an aside, I would add that I am sometimes struck that many forum users only use the forums when they have questions, they do not visit (or receive notifications by email) as a matter of course. That’s sad – and a missed opportunity.  I find the forums an invaluable and free source of continuing professional development. I do not read every post, far from it, but I do read threads that interest me and I occasionally bookmark a thread because I don’t know or am unsure and I want to see what others have to say on the topic.]

What often surprises me (I am being very careful with my words here) is the nature of the responses they get. While the answers given are most times correct, they do not always give provenance; they do not say where the original questioner can verify the response, in which document the answer can be found – and on what page too, please.  It is often not helpful enough simply to say (as one recent respondent to a question did), “on the EE website.”   Not pinpointing the source strikes me as unhelpful, certainly not as helpful as it might be – especially if the question has been asked because of disagreement in the school and the questioner needs support from documentation to settle the argument.

This could also be important when, instead of a single right answer to the question, there might be different and equally valid answers. That often happens when it is not a matter of policy but of local practice, with those responding stating what happens in their own subjects or schools as if this was the only way to do it (whatever “it” is), without appreciating that other subjects or schools may do it differently and also be right.  When the source is not documented, those following the thread cannot verify the accuracy of those responses and may be confused. Or worse.

And of course, if the respondent gets it wrong, gives a wrong answer and misleads the questioner (and is not corrected), the consequences may indeed be worse.

What surprises me most of all, concerns me most of all, is that we expect documentation from our students. When they make statements or claims in their work (and especially in their extended essays) that are not common knowledge, they are expected to state their source/s – and will probably lose marks if they do not and in many cases may well be found to have committed plagiarism or other form of academic misconduct.

Please note, I am not suggesting that colleagues are committing plagiarism when they do not source their statements in the forums. These colleagues are not writing academic papers. But this just adds weight to one of my guiding principles, we do not just cite our sources in order to “avoid plagiarism” – we cite our sources to help our readers.  When we do not cite our sources, we are being less helpful than we might – we should – hope to be.

What’s more, we cite our sources to help ourselves. Even if we think we know the answer to a question, it is worth checking that we have it right – and having checked, to share the location in our response.

What source?

Not too far removed from these considerations is the nature of the source.  We teach our students CRAAP and other models for evaluating their sources, we promote lateral reading and other strategies for evaluation purposes, we demonstrate that Google hit #1 is often not to be relied on or may not provide a full answer, we implore them to go to the original source. We despair when our students ignore our advice and our warnings and fail to think critically about the information they find and they use.  Information is not all equal – but so often is treated as if it is.

And yet (here’s another gripe), on those occasions when sources are cited in the forums, whether by questioner or respondent, it is often not the guide or other official documentation which are cited. So many times the source is given as my colleague/s (or even my student), my coordinator, a workshop leader, a textbook, or “someone from IB” (who is more likely to be a workshop leader or field representative and not actually from IB) (not that everyone who works for IB is equally knowledgeable on all matters IB).

Occasionally, one even gets the impression that respondents know that the official guide and a textbook say different things – and they seem more inclined to believe the textbook than the official document.  But that’s a completely different matter. It remains, information is not all equal.

So, a plea: when responding to questions on forums, cite your source/s, cite authoritative source/s.   Our citations do not need to be perfect APA or Chicago or whatever. They need to be helpful. A direct link to the page will do, a path will do.  It’s helpful, it’s good practice. It gets to be a habit – which makes for good role-modelling as we work with our colleagues and with our students.

Let’s do it!

 

Footnotes

  1. Abstracts are no longer required in extended essays – and have not been since the introduction of the new curriculum in 2016 for first examination in May 2018. If included in an extended essay, they count towards the word count and – given that examiners stop reading after 4000 words – may mean that the examiner does not reach the conclusion of the essay, which could affect the marks awarded (What’s new in EE from 2016).
  2. It says specifically in the Language Ab Initio Guide (for first examination 2020, page 8) that students may NOT write an extended essay in their ab initio language.
  3. The RPPF must be written in the language of the essay. This is stated several times in the guide itself. It is also stated, in bold, on the RPPF itself. (Although the examiner will be fluent in the language of the essay, there is no guarantee that that examiner has any knowledge of the student’s own language, whatever that may be.)
  4. It would be good to think that those posing basic questions have made an effort to find an answer, in the guides and in other documentation or in the forum/s. Given the frequency with which the same basic questions recur in the forums, one cannot help but wonder whether the questioner made any effort to see if that question had been asked before. In many cases, I doubt it.
    Nevertheless, there are no dumb questions.