The U.S. Senate confirmed former Harvard Law School (HLS) dean Elena Kagan today (Aug. 5) as the fourth woman to serve on the Supreme Court. In a series of interviews, Kagan’s former Harvard colleagues, former teachers, and friends lauded the choice, provided insights into the woman they’ve known for decades, and explained why they believe she will be a standout jurist.

The onetime colleagues agreed that Kagan has a razor-sharp mind, which they said is matched by a bright sense of humor, graciousness, thoughtfulness, and patience, along with a love of sports. They described her as a fine conversationalist and storyteller with a quick wit, to the point where a seat next to Kagan at a dinner party is a coveted spot.

HLS Dean Martha Minow, who first knew Kagan when the new justice was a student, called her “incredibly impressive even back then.” Minow, who is also the Jeremiah Smith Jr. Professor of Law, said that when Kagan eventually became dean, “She judiciously used humor to lighten potentially tense moments in faculty meetings, and has a real knack for drawing people out in conversation. She is unpretentious and warm. At heart, she is a problem solver, and she can’t help getting to the heart of any problem through her questions.”

A New York City native, Kagan earned an undergraduate degree in history from Princeton University in 1981. In 1983 she received a master’s in philosophy from Oxford University. She attended HLS and graduated in 1986 with honors. She clerked for Judge Abner Mikva of the U.S. Court of Appeals for the D.C. Circuit, and for U.S. Supreme Court Justice Thurgood Marshall. From 1991 to 1995, Kagan taught at the University of Chicago Law School. From 1995 to 1999, she served the Clinton administration as associate counsel, deputy assistant to the president for domestic policy, and deputy director of the Domestic Policy Council.

Kagan returned to HLS as a visiting law professor from 1999 to 2001.
In 2001 she became a professor of law, and in 2003 was named the Charles Hamilton Houston Professor of Law. Kagan served as dean of HLS from 2003 to 2009. In 2009, she was confirmed as U.S. solicitor general. The Senate approved her nomination to the Supreme Court by a 63 to 37 vote.

Known as a powerful dean, Kagan increased financial aid for law students entering public service after graduation, hired an array of prominent faculty members, and excelled at fundraising for the School. She also took steps to improve student life, bringing a volleyball court and skating rink to campus, and providing free coffee.

Dean Kagan also “very vigorously pushed to diversify the Law School student body even more, by recruiting some of the best and brightest students of color from around the country and around the world,” said Charles Ogletree, the Jesse Climenko Professor of Law.

Her former colleagues gave Kagan high marks for her intellect, analytical skills, and readiness for the top court. They said Kagan, an authority on administrative and constitutional law, has the ability to absorb and process what most people would consider an overwhelming amount of information, and then sift through it to create a nuanced analysis.

Carol Steiker, the Howard J. and Katherine W. Aibel Professor of Law and a longtime friend of Kagan, said her colleague was thoughtful and quick to thank those who helped her or the School, sending them kind, personalized notes. She often surprised colleagues with considerate gifts.

“She is very curious about the world and about other people … she is also very funny and very down-to-earth,” said Steiker.

Steiker first met Kagan under somewhat adversarial circumstances. As students, both were vying for the top spot at the Harvard Law Review and ended up in a runoff, one that Steiker won. Unsuccessful candidates in such tightly contested elections sometimes got “sulky, undermining, and retaliatory,” Steiker said.
But not Kagan. Steiker said the runoff “was competitive without it at all being ugly. It says a lot about Elena … [she] does not lose very often in her life.”

Kagan went on to take the second-in-command slot at the journal, and the two women quickly grew close, toiling tirelessly through the summer to finish leftover publication work so they could launch their first issue on time that fall.

“We were very good friends and colleagues in Law School,” Steiker added.

Later, when both clerked in Washington for Marshall, Steiker said that while she and some colleagues would take a break by working out to a Jane Fonda exercise tape, Kagan would play basketball on the top floor of the Supreme Court Building, on the aptly named “Highest Court in the Land.”

“Don’t let the term ‘Shorty’ fool you,” said Ogletree of the nickname given to Kagan by Marshall. “She has a devastating layup and jump shot.”

Ogletree said that when Kagan came to HLS he quickly learned not to play basketball with her for fear she would pull a tricky move and he would “end up getting called with a flagrant foul.”

Echoing Ogletree, Steiker stayed away from another game involving Kagan: poker. “She used to play poker, and I quickly learned not to do that because she is a good poker player,” Steiker said.

Kagan’s insight and intellect were evident early to a young HLS professor who had her as a student. Richard Fallon, the Ralph S. Tyler Jr. Professor of Constitutional Law, was in his third year teaching “The Federal Courts and the Federal System,” a class involving dense, intricate rules and technical details.

“I can remember seeing Elena’s hand go up and feeling my heart skip a beat, and a little bit of wobble in my knees, because I knew that what she was going to say was going to be incisive, but it was incisive not in the ‘Hey professor, I’ve got a neat thing to throw into the mix [kind of way],’ but some probing question.
… I was still in the stage where students were hitting me with things I hadn’t thought about before, and Elena hit me with as many intelligent, cogent points that I had never considered before as any student that I have ever had.

“From the beginning of the time that I knew her, I have thought that she had one of the most powerful, analytical intelligences of anybody I have ever met.”

Eventually, Fallon served on a number of academic appointment committees with Kagan when she was dean. He said she had an impressive ability not only to grasp the academic work of a faculty candidate, but also to absorb the comments made by each committee member before passing her own judgment.

Though he was quick to add that assessing the scholarship or strengths of a potential hire is not the same as deciding legal cases, Fallon said Kagan brought a “thoughtfulness and judiciousness” to the deliberative process.

She had an ability “not to commit herself until she had heard all the arguments, until she had heard everything that anybody else had to say, and then a capacity to take in everything that other people had had to say, respond to it in a thoughtful way, and make up her mind decisively.”

Ogletree, who thinks Kagan will be a moderate justice who will “work within the constraints of the law,” said her intellectual curiosity is one of her key qualifications for the job.

“There is no area where she won’t have an intellectual curiosity, and a willingness to dive in and learn more about the role the Constitution plays in so many areas.”

Steiker recalled that Kagan, when she worked at the Law Review, was “an extraordinary editor, just a brilliant mind, who could map out an idea like some amazing cartographer.
Her mind could see the whole thing, the whole shape of a complicated piece of legal scholarship or set of arguments.”“One of my colleagues once described another colleague as having a mind like a bell,” Steiker said, “and I always thought of that description as being very apt for Elena, just the clarity of her thinking, even about issues that are extraordinarily complex. It’s really striking.”
President of the Nigeria Football Federation, Amaju Pinnick, has disclosed that the body will be making a lot of difficult decisions regarding some of its activities in the coming days.

Pinnick revealed that the decisions to be taken include a few provisions in Gernot Rohr’s new contract, as well as the bonuses and allowances of the Super Eagles players.

The former Delta State football chairman, who maintained that the Franco-German coach will now be paid in naira, insisted that the players are not exempted from the new arrangement, as they will also get the same treatment whenever they are playing on home soil.

“When they play in Nigeria, their bonuses have been in naira. When they play abroad, we pay them in dollars but that can even change,” he said.

The NFF boss explained further that the Federation will be taking such steps in order to back the government on its new fiscal policy.

“As I said, we are going to make many difficult decisions that will go across the board because we need to promote our fiscal policy.

“If the government is saying that we need to strengthen our naira, we should not be paying in dollars. We are an integral part of the government,” he concluded.
The Sagicor Sigma Corporate Run, set for Sunday, February 21, aims to raise $50 million for three worthy causes. Sagicor Group raised a record $26 million in 2015, and the aim for this year’s 18th staging is to double the $25 million target set last year. This year’s beneficiaries are Children with Cancer across the island, as well as the Jamaica Cancer Society and the Black River Hospital Paediatric Unit in St Elizabeth.

President and CEO of the Sagicor Group, Richard Byles, challenged Jamaicans to donate and pledge funds towards Sigma Run 2016. “Over the years, we have raised $166 million mainly for children and institutions,” Byles disclosed at the media launch yesterday at The Jamaica Pegasus hotel in New Kingston. “Last year, we raised the most money in one year of $26 million. This year, we hope to raise $50 million. We have some special contribution cards in order to reach the target,” Byles added.

“This year, we are focusing on cancer. We could not want a more worthy cause,” he disclosed. “We plan to give most of the funds to the Jamaica Cancer Society. We also want to help the children with cancer and the Black River Hospital,” Byles said.

This year’s patrons for the event are track and field stars Asafa Powell and Novlene Williams-Mills and actress Sheryl Lee Ralph. Cancer survivor Williams-Mills, a relay gold medallist at last year’s IAAF World Championships in Beijing, said she could not turn down the opportunity offered by Sagicor.

“It’s an honour and pleasure to be a part of this year’s Sigma Run. I’m a proud breast cancer survivor going back to 2012 when cancer became a part of my life,” an emotional Williams-Mills stated. “I know what it is for a family with a breast cancer survivor. It is a burden. I have to look in the mirror and see the many scars on my body. It is hard to deal with this thing called cancer, it will rip you apart,” she said.
“Cancer has been a part of my family, as my sister died from ovarian cancer and my mother is also a cancer survivor. I ask for support for other survivors as they need help,” Williams-Mills added.

Sheryl Lee Ralph revealed that cancer took away her father. “In 2013, prostate cancer took my dad, he fought every minute of the disease. I’m very happy to be partnering with Sagicor to help educate others about the disease. Health really matters,” the actress said.

DONATION TO HELP CHILDREN
IAN Paisley has spoken of how he came to trust former British Prime Minister Tony Blair – because he was descended from Ulster Protestants.

Dr Paisley was speaking during a BBC NI documentary broadcast on Monday night which has led to bitter recriminations, with the former First Minister saying he was stabbed in the back by both his church and the DUP for going into government with Martin McGuinness.

Journalist Eamonn Mallie asked Dr Paisley about his relationship with Tony Blair. “We were both Ulster Protestants,” said Paisley. “His grandfather was a Protestant from Donegal and a leader there in the Orange Order. He understood where I was coming from.”

He said he was aware that Blair’s grandmother had warned him never to marry a Catholic; Blair had confirmed this. But when Blair told him that he was converting to Catholicism, Paisley said he told him: “You fool.”

Mr Paisley also recalled the plot to unseat him as DUP leader. He has alleged there was a meeting with Peter Robinson, Nigel Dodds, party whip Lord Morrow and his special advisor Tim Johnston.

“Nigel Dodds said to me I want you to be gone by Friday,” said Paisley. “I just more or less smirked and Peter said ‘no, no, no he needs to stay in for another couple of months’.”

Eileen Paisley said her husband was “assassinated with words and deeds”, treated shamefully, and was left with no option but to stand down. She described Nigel Dodds as a “cheeky sod”.

“I detected a nasty spirit arising from some of the other MPs and the way they spoke to Ian,” Mrs Paisley said. “I was very annoyed one day with the way some of them spoke to him and addressed him.

“Whenever they said to him about what was going on and he said to them ‘well, that’s what should be done’ and they said ‘och doc’, you know? Sort of, ‘don’t be so stupid’.

“That sort of set the alarm bells ringing in my head,” she added.
Mrs Paisley also said attempts to paint her son Ian Paisley Jnr as sleazy because of a planning inquiry were ultimately found not to be true.

“All the sleaze was in his own house,” Mr Paisley said in reference to Peter Robinson, in a swipe at his wife’s notorious affair.
“We would never consciously undermine our own efforts over the past ten years.” – from the eNCA apology

Sulaiman Philip

Winning the right to broadcast the Oscar Pistorius trial came with clearly defined restrictions, one of which prevented the media from showing the faces or publishing photographs of witnesses who had not consented to being filmed.

Judge Dunstan Mlambo’s ruling was hailed as a balancing act between press freedom and individuals’ rights by some, and as censorship by others.

On the second day of the trial that ruling was put to the test. Patrick Conroy, head of news at eNCA, had checked with the court clerk for permission to use a photo of witness Michelle Burger that had appeared in two Afrikaans newspapers. The argument Conroy and eNCA put forward was that showing a picture of Burger with the caption – “On the stand: Michelle Burger, Pistorius’s neighbour” – to accompany the audio feed as she testified was not a violation of the judge’s order.

But a fuming Gerrie Nel, the prosecutor, reminded Conroy that the state interpreted the ruling to mean that any image of a witness, no matter the source, would breach the spirit and intent of the earlier ruling. Nel told the UK’s Daily Mail newspaper that eNCA originally wanted to use a photograph of Burger taken outside the court. “The court said no. They still went ahead and did it using a photo they found somewhere else.”

Attorney Pamela Stein does not read the ruling as narrowly. A media specialist and partner at the firm Webber Wentzel, as well as a co-author of the newly released Practical Guide to Media Law Handbook, she says: “If I were advising eNCA I would have told them to go ahead and publish the photo. The picture was not taken while the witness was on the stand.
The court’s control extends only as far as the door of the court.”

Confusion around the interpretation of Judge Mlambo’s ruling comes from the wording used – no images of witnesses who did not provide permission. The newspapers, Beeld and Die Burger, and eNCA argued that his ruling forbade only images taken inside the court while testimony was being given. This is a longstanding concession between the media and the justice system.

Conroy argued that the legal advice the news group got was in line with this understanding, before conceding in an apology on the channel’s website: “But, on reflection, this was a bad judgement call on our part and we accept that it did not accord with the spirit of the court order.”

Trial judge Thokozile Masipa strengthened the ruling by saying any image of a witness who did not want their face shown was now off limits. She went on to warn the media: “If you do not behave, you will not be treated with soft gloves.”

In an editorial, South Africa’s The Times newspaper said: “At the heart of yesterday’s controversy was the weakness of the Mlambo judgment. The judge shied away from either opening the courtroom to broadcasters or keeping them out altogether. By choosing a middle route, he has opened the way for confusion and, as occurred yesterday, unwise rulings that threaten media freedoms and extend the procedural authority of judges beyond courtrooms and on to the streets.”

This is not the first time that photographs have caused an uproar in the matter. A year ago, crime scene photos from Pistorius’s Silver Lakes home were leaked. At the time, the original investigating officer told the English newspaper Sunday People that he knew of police officers who were being offered large sums of money for photographs taken in the house.

Even Blade Nzimande, the general secretary of the South African Communist Party, waded into the controversy.
He wrote in Umsebenzi, the SACP magazine: “Even worse, the sentiment coming across is that it is Pistorius’s rights that have been violated and not those of the Steenkamp family and of Reeva, whose blood is literally splashed in that footage! Sanef [South African National Editors’ Forum] is dead silent on these matters. And it is also the rights of a man that are elevated above those of a woman. In fact, this patriarchal and elitist message has come to characterise the voluminous media coverage of this matter, especially by eNCA on 4 June 2013 and before that!”

The intent of eNCA at the beginning of the trial, in the words of journalist Karen Maughan, was to give clear-headed insights into the workings of the South African judiciary. The channel’s top legal reporter wrote: “Coverage so far has been tainted by inaccuracy and sensation. The good and the bad of our justice system in South Africa will be on display. We will cover this trial honestly, calmly and fairly.”

With 80 accredited journalists filling the courtroom and the overflow area, and another 200 filling a room outside the court, all looking for exclusive content, it was inevitable that the line of what was permissible was going to be tested.

As blogger Akanyang Africa wrote: “Of course I know that this [Judge Mlambo’s restrictions] would have been seen by many as being the worst censorship in as far as press freedom is concerned. But rights have limitations too and by putting this condition in place, Judge Mlambo would have exercised and limited that right correctly.”

The law is fluid, a living thing, especially in a democracy as young as South Africa. There will be a continuous give and take as the citizenry and the government and its institutions find a comfortable space to co-exist. The scrutiny given to this trial is proving to be the perfect vehicle for the media and the justice system to redefine the margins of what is, and what is not, permissible.
GANs are neural networks used in unsupervised learning that generate synthetic data from given input data. A GAN has two components: a generator and a discriminator. The generator produces new instances of an object, and the discriminator determines whether a new instance belongs to the actual dataset.

A generative model learns how the data is generated, i.e. the structure of the data, which allows the system to generate samples with similar statistical properties. A discriminative model, by contrast, learns the relation between the data and the label associated with it; it categorizes the input data without knowing how the data is generated. A GAN exploits the concepts behind both models to get a better network architecture.

This tutorial on GANs will help you build a neural network that fills in the missing part of a handwritten digit. It will cover how to build an MNIST digit classifier and simulate a dataset of handwritten digits with sections of the digits missing. Next, you will learn to use the MNIST classifier to predict on the noised/masked MNIST digits dataset (the simulated dataset) and implement a GAN to generate back the missing regions of the digits. The tutorial will also cover using the MNIST classifier to predict on the digits generated by the GAN, and finally compare performance between the masked data and the generated data.

This tutorial is an excerpt from a book written by Matthew Lamons, Rahul Kumar, and Abhishek Nagaraja titled Python Deep Learning Projects. The book will help you develop your own deep learning systems in a straightforward and efficient way, with projects in computational linguistics and computer vision to help you master the subject. All of the Python files and Jupyter Notebook files for this tutorial can be found at GitHub. In this tutorial, we will be using the Keras deep learning library.
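For context only (none of the code below requires it), the adversarial game described above is conventionally written as a two-player minimax objective; this is the standard formulation from the original GAN paper, where in this tutorial the generator’s input \(z\) is a masked digit rather than random noise:

```latex
\min_G \max_D \; V(D, G) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator \(D\) tries to maximize this value (scoring real images high and generated images low), while the generator \(G\) tries to minimize it by producing images that \(D\) scores as real.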
Importing all of the dependencies

We will be using numpy, matplotlib, keras, tensorflow, and the tqdm package in this exercise. Here, TensorFlow is used as the backend for Keras. You can install these packages with pip. For the MNIST data, we will be using the dataset available in the keras module with a simple import:

```python
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline

from tqdm import tqdm
from keras.layers import Input, Conv2D
from keras.layers import AveragePooling2D, BatchNormalization
from keras.layers import UpSampling2D, Flatten, Activation
from keras.models import Model, Sequential
from keras.layers.core import Dense, Dropout
from keras.layers.advanced_activations import LeakyReLU
from keras.optimizers import Adam
from keras import backend as k
from keras.datasets import mnist
```

It is important that you set a seed for reproducibility:

```python
# set seed for reproducibility
seed_val = 9000
np.random.seed(seed_val)
random.seed(seed_val)
```

Exploring the data

We will load the MNIST data into our session from the keras module with mnist.load_data(). After doing so, we will print the shape and the size of the dataset, as well as the number of classes and unique labels in the dataset:

```python
(X_train, y_train), (X_test, y_test) = mnist.load_data()

print('Size of the training_set: ', X_train.shape)
print('Size of the test_set: ', X_test.shape)
print('Shape of each image: ', X_train[0].shape)
print('Total number of classes: ', len(np.unique(y_train)))
print('Unique class labels: ', np.unique(y_train))
```

We have a dataset with 10 different classes and 60,000 images, with each image having a shape of 28*28 and each class having 6,000 images.
Let’s plot and see what the handwritten images look like:

```python
# Plot of 9 random images
for i in range(0, 9):
    plt.subplot(331+i)                    # plot of 3 rows and 3 columns
    plt.axis('off')                       # turn off axis
    plt.imshow(X_train[i], cmap='gray')   # gray scale
```

The output is as follows:

Let’s plot a handwritten digit from each class:

```python
# plotting image from each class
fig = plt.figure(figsize=(8, 4))
columns = 5
rows = 2
for i in range(0, rows*columns):
    fig.add_subplot(rows, columns, i+1)
    plt.title(str(i))                     # label
    plt.axis('off')                       # turn off axis
    plt.imshow(X_train[np.where(y_train==i)][0], cmap='gray')  # first image of class i
plt.show()
```

The output is as follows:

Look at the maximum and the minimum pixel value in the dataset:

```python
print('Maximum pixel value in the training_set: ', np.max(X_train))
print('Minimum pixel value in the training_set: ', np.min(X_train))
```

The output is as follows:

Preparing the data

Type conversion, centering, scaling, and reshaping are some of the pre-processing steps we will implement in this tutorial.

Type conversion, centering and scaling

Set the type to np.float32. For centering, we subtract 127.5 from the dataset; the values in the dataset will then range between -127.5 and 127.5. For scaling, we divide the centered dataset by half of the maximum pixel value in the dataset, that is, 255/2.
This will result in a dataset with values ranging between -1 and 1:

```python
# Converting integer values to float types
X_train = X_train.astype(np.float32)
X_test = X_test.astype(np.float32)

# Scaling and centering
X_train = (X_train - 127.5) / 127.5
X_test = (X_test - 127.5) / 127.5

print('Maximum pixel value in the training_set after Centering and Scaling: ', np.max(X_train))
print('Minimum pixel value in the training_set after Centering and Scaling: ', np.min(X_train))
```

Let’s define a function to rescale the pixel values of the scaled image to range between 0 and 255:

```python
# Rescale the pixel values (0 and 255)
def upscale(image):
    return (image*127.5 + 127.5).astype(np.uint8)

# Let's see if this works
z = upscale(X_train)
print('Maximum pixel value after upscaling scaled image: ', np.max(z))
print('Minimum pixel value after upscaling scaled image: ', np.min(z))
```

A plot of 9 centered and scaled images after upscaling:

```python
for i in range(0, 9):
    plt.subplot(331+i)                            # plot of 3 rows and 3 columns
    plt.axis('off')                               # turn off axis
    plt.imshow(upscale(X_train[i]), cmap='gray')  # gray scale
```

The output is as follows:

Masking/inserting noise

For the needs of this project, we need to simulate a dataset of incomplete digits. So, let’s write a function to mask small regions in the original image to form the noised dataset. The idea is to mask an 8*8 region of the image, with the top-left corner of the mask falling between the 9th and 12th pixel (index 8 to 11, inclusive) along both the x and y axes of the image.
This is to make sure that we are always masking around the center part of the image:

```python
def noising(image):
    array = np.array(image)
    i = random.choice(range(8,12))  # x coordinate for the top left corner of the mask
    j = random.choice(range(8,12))  # y coordinate for the top left corner of the mask
    array[i:i+8, j:j+8] = -1.0      # setting the pixels in the masked region to -1
    return array

noised_train_data = np.array([*map(noising, X_train)])
noised_test_data = np.array([*map(noising, X_test)])
print('Noised train data Shape/Dimension : ', noised_train_data.shape)
print('Noised test data Shape/Dimension : ', noised_test_data.shape)
```

A plot of 9 scaled noised images after upscaling:

```python
# Plot of 9 scaled noised images after upscaling
for i in range(0, 9):
    plt.subplot(331+i)    # plot of 3 rows and 3 columns
    plt.axis('off')       # turn off axis
    plt.imshow(upscale(noised_train_data[i]), cmap='gray')  # gray scale
```

The output is as follows:

Reshaping

Reshape the original dataset and the noised dataset to a shape of 60000*28*28*1. This is important since the 2D convolutions expect to receive images of a shape of 28*28*1:

```python
# Reshaping the training data
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1)
print('Size/Shape of the original training set: ', X_train.shape)

# Reshaping the noised training data
noised_train_data = noised_train_data.reshape(noised_train_data.shape[0],
                                              noised_train_data.shape[1],
                                              noised_train_data.shape[2], 1)
print('Size/Shape of the noised training set: ', noised_train_data.shape)

# Reshaping the testing data
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1)
print('Size/Shape of the original test set: ', X_test.shape)

# Reshaping the noised testing data
noised_test_data = noised_test_data.reshape(noised_test_data.shape[0],
                                            noised_test_data.shape[1],
                                            noised_test_data.shape[2], 1)
print('Size/Shape of the noised test set: ', noised_test_data.shape)
```

MNIST classifier

To start off with modeling, let’s build a simple convolutional neural network (CNN) digit classifier.
The first layer is a convolution layer with 32 filters of shape 3*3, relu activation, and Dropout as the regularizer. The second layer is a convolution layer with 64 filters of shape 3*3, relu activation, and Dropout as the regularizer. The third layer is a convolution layer with 128 filters of shape 3*3, relu activation, and Dropout as the regularizer, and is finally flattened. The fourth layer is a Dense layer of 1024 neurons with relu activation. The final layer is a Dense layer with 10 neurons corresponding to the 10 classes in the MNIST dataset, with softmax activation. batch_size is set to 128, the optimizer used is adam, and validation_split is set to 0.2, meaning 20% of the training set will be used as the validation set:

```python
# input image shape
input_shape = (28,28,1)

def train_mnist(input_shape, X_train, y_train):
    model = Sequential()
    model.add(Conv2D(32, (3, 3), strides=2, padding='same', input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(Dropout(0.2))
    model.add(Conv2D(64, (3, 3), strides=2, padding='same'))
    model.add(Activation('relu'))
    model.add(Dropout(0.2))
    model.add(Conv2D(128, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(1024, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer='adam', metrics=['accuracy'])
    model.fit(X_train, y_train, batch_size=128, epochs=3,
              validation_split=0.2, verbose=1)
    return model

mnist_model = train_mnist(input_shape, X_train, y_train)
```

The output is as follows:

Use the built CNN digit classifier on the masked images to get a measure of its performance on digits that are missing small sections:

```python
# prediction on the masked images
pred_labels = mnist_model.predict_classes(noised_test_data)
print('The model accuracy on the masked images is:', np.mean(pred_labels==y_test)*100)
```

On the masked images, the CNN digit classifier is 74.9% accurate. It might be slightly different when you run it, but it will still be very close.

Defining hyperparameters for GAN

The following hyperparameters are used throughout the code and are totally configurable:

```python
# Smoothing value
smooth_real = 0.9
# Number of epochs
epochs = 5
# Batchsize
batch_size = 128
# Optimizer for the generator
optimizer_g = Adam(lr=0.0002, beta_1=0.5)
# Optimizer for the discriminator
optimizer_d = Adam(lr=0.0004, beta_1=0.5)
# Shape of the input image
input_shape = (28,28,1)
```

Building the GAN model components

With the idea that the final GAN model will be able to fill in the part of the image that is missing (masked), let’s define the generator. You can understand how to define the generator, discriminator, and DCGAN by referring to our book.

Training GAN

We’ve built the components of the GAN. Let’s train the model in the next steps!

Plotting the training – part 1

During each epoch, the following function plots 9 generated images. For comparison, it will also plot the corresponding 9 original target images and 9 noised input images.
We need to use the upscale function we've defined when plotting, to make sure the images are scaled to the range between 0 and 255, so that you do not encounter issues when plotting:

```python
def generated_images_plot(original, noised_data, generator):
    print('NOISED')
    for i in range(9):
        plt.subplot(331 + i)
        plt.axis('off')
        plt.imshow(upscale(np.squeeze(noised_data[i])),
                   cmap='gray')  # upscale for plotting
    plt.show()

    print('GENERATED')
    for i in range(9):
        pred = generator.predict(noised_data[i:i+1], verbose=0)
        plt.subplot(331 + i)
        plt.axis('off')
        plt.imshow(upscale(np.squeeze(pred)),
                   cmap='gray')  # upscale to avoid plotting errors
    plt.show()

    print('ORIGINAL')
    for i in range(9):
        plt.subplot(331 + i)
        plt.axis('off')
        plt.imshow(upscale(np.squeeze(original[i])),
                   cmap='gray')  # upscale for plotting
    plt.show()
```

The output of this function is as follows:

Plotting the training – part 2

Let's define another function that plots the images generated during each epoch. To show the difference, the plot will also include the original and the masked/noised images. The top row contains the original images, the middle row contains the masked images, and the bottom row contains the generated images. The plot has 12 rows in the sequence: row 1 – original, row 2 – masked, row 3 – generated, row 4 – original, row 5 – masked, …, row 12 – generated.
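Both plotting functions rely on the upscale helper defined earlier in the book. As a reference, here is a minimal sketch of such a helper, assuming pixel values normalized to [-1, 1] (a common convention for GANs with tanh generator outputs); the book's actual definition and normalization range may differ:

```python
import numpy as np

def upscale(image):
    """Map pixel values from the assumed [-1, 1] range to [0, 255].

    Sketch only -- the book defines its own upscale earlier, and the
    normalization range used there may differ (e.g. [0, 1]).
    """
    return (image + 1.0) * 127.5

scaled = upscale(np.array([-1.0, 0.0, 1.0]))
```

With this mapping, -1 becomes 0, 0 becomes 127.5, and 1 becomes 255, so matplotlib displays the full grayscale range without clipping surprises.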
Let's take a look at the code for the same:

```python
def plot_generated_images_combined(original, noised_data, generator):
    rows, cols = 4, 12
    num = rows * cols
    image_size = 28

    generated_images = generator.predict(noised_data[0:num])
    imgs = np.concatenate([original[0:num], noised_data[0:num],
                           generated_images])
    imgs = imgs.reshape((rows * 3, cols, image_size, image_size))
    imgs = np.vstack(np.split(imgs, rows, axis=1))
    imgs = imgs.reshape((rows * 3, -1, image_size, image_size))
    imgs = np.vstack([np.hstack(i) for i in imgs])
    imgs = upscale(imgs)
    plt.figure(figsize=(8, 16))
    plt.axis('off')
    plt.title('Original Images: top rows, '
              'Corrupted Input: middle rows, '
              'Generated Images: bottom rows')
    plt.imshow(imgs, cmap='gray')
    plt.show()
```

The output is as follows:

Training loop

Now we are at the most important part of the code: the part where all of the functions we previously defined come together. The steps are as follows:

1. Load the generator by calling the img_generator() function.
2. Load the discriminator by calling the img_discriminator() function and compile it with the binary cross-entropy loss and optimizer_d, which we defined in the hyperparameters section.
3. Feed the generator and the discriminator to the dcgan() function and compile it with the binary cross-entropy loss and optimizer_g, which we defined in the hyperparameters section.
4. Create a new batch of original images and masked images.
5. Generate new fake images by feeding the batch of masked images to the generator.
6. Concatenate the original and generated images so that the first 128 images are all original and the next 128 images are all fake. It is important that you do not shuffle the data here, otherwise training becomes difficult. Label the generated images as 0 and the original images as 0.9 instead of 1. This is one-sided label smoothing on the original images. The reason for using label smoothing is to make the network resilient to adversarial examples.
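The label construction with one-sided smoothing can be sketched on its own with numpy, exactly as the training loop below does it:

```python
import numpy as np

batch_size = 128
smooth_real = 0.9

# Combined discriminator labels: first half real, second half fake.
# Initialize everything to the hard fake label 0.
dis_lab = np.zeros(2 * batch_size)

# One-sided label smoothing: only the real images get the softened
# label 0.9 instead of 1; the fake labels stay at 0.
dis_lab[:batch_size] = smooth_real
```

The smoothing is "one-sided" because only the positive (real) class is softened, which discourages the discriminator from becoming over-confident without rewarding it for pushing fake scores above 0.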
It's called one-sided because we smooth the labels only for the real images.
7. Set discriminator.trainable to True to enable training of the discriminator, and feed this set of 256 images and their corresponding labels to the discriminator for classification. Then set discriminator.trainable to False and feed a new batch of 128 masked images, labeled as 1, to the GAN (DCGAN) for classification. Setting discriminator.trainable to False ensures the discriminator is not trained while training the generator.
8. Repeat steps 4 through 7 for the desired number of epochs.

We have placed the plot_generated_images_combined() function and the generated_images_plot() function so that both produce a plot after the first iteration of the first epoch and at the end of each epoch. Feel free to place these plot functions according to the frequency of plots you need displayed:

```python
import numpy as np
from tqdm import tqdm

def train(X_train, noised_train_data, input_shape, smooth_real,
          epochs, batch_size, optimizer_g, optimizer_d):

    # define two empty lists to store the discriminator
    # and the generator losses
    discriminator_losses = []
    generator_losses = []

    # Number of iterations possible with batches of size 128
    iterations = X_train.shape[0] // batch_size

    # Load the generator and the discriminator
    generator = img_generator(input_shape)
    discriminator = img_discriminator(input_shape)

    # Compile the discriminator with binary_crossentropy loss
    discriminator.compile(loss='binary_crossentropy',
                          optimizer=optimizer_d)

    # Feed the generator and the discriminator to the dcgan function
    # to form the DCGAN architecture
    gan = dcgan(discriminator, generator, input_shape)

    # Compile the DCGAN with binary_crossentropy loss
    gan.compile(loss='binary_crossentropy', optimizer=optimizer_g)

    for i in range(epochs):
        print('Epoch %d' % (i + 1))
        # Use tqdm to get an estimate of the time remaining
        for j in tqdm(range(1, iterations + 1)):
            # batch of original images (batch = batch_size)
            original = X_train[np.random.randint(0, X_train.shape[0],
                                                 size=batch_size)]
            # batch of noised images (batch = batch_size)
            noise = noised_train_data[
                np.random.randint(0, noised_train_data.shape[0],
                                  size=batch_size)]

            # Generate fake images
            generated_images = generator.predict(noise)

            # Labels for the combined data: initialize all as 0 (fake)
            dis_lab = np.zeros(2 * batch_size)
            # data for the discriminator
            dis_train = np.concatenate([original, generated_images])
            # label smoothing for the original images
            dis_lab[:batch_size] = smooth_real

            # Train the discriminator on the combined batch
            discriminator.trainable = True
            discriminator_loss = discriminator.train_on_batch(dis_train,
                                                              dis_lab)
            # save the loss
            discriminator_losses.append(discriminator_loss)

            # Train the generator
            gen_lab = np.ones(batch_size)
            discriminator.trainable = False
            sample_indices = np.random.randint(0, X_train.shape[0],
                                               size=batch_size)
            original = X_train[sample_indices]
            noise = noised_train_data[sample_indices]
            generator_loss = gan.train_on_batch(noise, gen_lab)
            # save the loss
            generator_losses.append(generator_loss)

            if i == 0 and j == 1:
                print('Iteration - %d' % j)
                generated_images_plot(original, noise, generator)
                plot_generated_images_combined(original, noise, generator)

        print("Discriminator Loss: ", discriminator_loss,
              ", Adversarial Loss: ", generator_loss)
        # training plot 1
        generated_images_plot(original, noise, generator)
        # training plot 2
        plot_generated_images_combined(original, noise, generator)

    # plot the training losses
    plt.figure()
    plt.plot(range(len(discriminator_losses)), discriminator_losses,
             color='red', label='Discriminator loss')
    plt.plot(range(len(generator_losses)), generator_losses,
             color='blue', label='Adversarial loss')
    plt.title('Discriminator and Adversarial loss')
    plt.xlabel('Iterations')
    plt.ylabel('Loss (Adversarial/Discriminator)')
    plt.legend()
    plt.show()

    return generator

generator = train(X_train, noised_train_data, input_shape, smooth_real,
                  epochs, batch_size, optimizer_g, optimizer_d)
```

The output is as follows:

Generated images plotted with training plots at the end of the first iteration of epoch 1

Generated images plotted with
training plots at the end of epoch 2

Generated images plotted with training plots at the end of epoch 5

Plot of the discriminator and adversarial loss during training

Predictions

CNN classifier predictions on the noised and generated images

We will call the generator on the masked MNIST test data to generate images, that is, to fill in the missing parts of the digits:

```python
# restore missing parts of the digits with the generator
gen_imgs_test = generator.predict(noised_test_data)
```

Then, we will pass the generated MNIST digits to the digit classifier we modeled earlier:

```python
# predict on the restored/generated digits
gen_pred_lab = mnist_model.predict_classes(gen_imgs_test)
print('The model accuracy on the generated images is:',
      np.mean(gen_pred_lab == y_test) * 100)
```

The MNIST CNN classifier is 87.82% accurate on the generated data. The following plot shows 10 images produced by the generator, the actual label of each image, and the label predicted by the digit classifier after processing the generated image:

```python
# plot of 10 generated images and their predicted labels
fig = plt.figure(figsize=(8, 4))
plt.title('Generated Images')
plt.axis('off')
columns = 5
rows = 2
for i in range(rows * columns):
    fig.add_subplot(rows, columns, i + 1)
    plt.title('Act: %d, Pred: %d'
              % (y_test[i], gen_pred_lab[i]))  # actual and predicted labels
    plt.axis('off')  # turn off axis
    plt.imshow(upscale(np.squeeze(gen_imgs_test[i])), cmap='gray')  # gray scale
plt.show()
```

The output is as follows:

The Jupyter Notebook code files for the preceding DCGAN MNIST inpainting, as well as for the DCGAN Fashion MNIST inpainting, can be found on GitHub.

Summary

We built a deep convolutional GAN in Keras on handwritten MNIST digits and examined the roles of the generator and the discriminator components of the GAN. We defined the key hyperparameters and, in some places, explained why we chose the values we did.
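The accuracy figures above come from a simple element-wise comparison of predicted and true labels. Here is a tiny illustration of the same computation with made-up labels (the values are hypothetical, not the book's data):

```python
import numpy as np

# Hypothetical predicted and true labels, for illustration only
pred_lab = np.array([7, 2, 1, 0, 4, 1, 4, 9, 5, 9])
true_lab = np.array([7, 2, 1, 0, 4, 1, 4, 9, 6, 9])

# Same accuracy computation the tutorial uses:
# mean of the boolean match vector, scaled to a percentage
accuracy = np.mean(pred_lab == true_lab) * 100
```

Nine of the ten labels match, so accuracy comes out to 90.0; with the real y_test and the classifier's predictions this is exactly how the 74.9% and 87.82% figures are obtained.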
Finally, we tested the GAN's performance on unseen data and determined that we achieved our goals. To explore insightful projects for mastering deep learning and neural network architectures using Python and Keras, check out the book Python Deep Learning Projects.
Last November, Magic Leap introduced an Independent Creator Program. Yesterday, they named their selections for this program. The Magic Leap team reviewed over 6,500 entries and selected projects in a wide range of categories, including education, entertainment, gaming, enterprise, and more. The Magic Leap Independent Creator Program is a development fund to help individual developers and teams kick-start their Magic Leap One projects. They are offering grants of between $20,000 and $500,000 per project, along with developer, hardware, and marketing support. The teams selected include: [image: Source: Magic Leap] The selected teams will now be paired with Magic Leap's Developer Relations team for guidance and support. Once the teams have built, submitted, and launched their projects, the best experiences will be showcased at the L.E.A.P. Conference in 2019. Teams will receive dedicated marketing support, including planning, promotion, and social media amplification. The Developer Relations team, consisting of Magic Leap's subject matter experts and QA testers, will give developers one-on-one guidance.
MONTREAL – To reward its top sales travel professionals for all their hard work, Transat Distribution hosted a "well-deserved vacation" last month in the heart of Cancun.

Award-winning travel professionals spent a "dream week" seeing all that the Cancun area has to offer. Transat's leadership team treated the group to an appreciation supper at the Excellence Riviera Cancun, took them to an acrobatic show, and set sail with them on a private catamaran in Isla Mujeres. During the Bravo Awards Gala night hosted by Luxury Retreats, agents were greeted at the Hacienda Magica in Puerto Aventuras and attended Cirque du Soleil's spectacular Joya performance.

"Every year it gives us great pleasure to recognize the exceptional contribution of these travel professionals. They are always ready to give their best, offering the maximum of their time and energy, to meet the needs of their customers," said Nathalie Boyer, General Manager at Transat Distribution Canada. "To acknowledge their work by having them join the BRAVO Excellence Club is, for us, proof of their importance."

Ontario Bravo Winners
Quebec Bravo Winners
West Bravo Winners
Danielle Durocher, Rita Polegri, Nathalie Boyer, and Kimberley Wood
Amr Younes, VP, Revenue Optimization, Luxury Retreats, and Nathalie Boyer, GM, Transat Distribution Canada

Tags: Cancun, Transat
Monday, May 28, 2018 – Travelweek Group