First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts as 28x28 images. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500,000 labeled examples and the test set about 19,000. Given these sizes, it should be possible to train model...
url = 'https://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
data_root = '.'  # Change me to store data elsewhere

def download_progress_hook(count, blockSize, totalSize):
    """A hook to report the progress of a download. This is mostly intended for users with
    slow internet connections. Rep...
machine-learning/deep-learning/udacity/ud730/1_notmnist.ipynb
pk-ai/training
mit
Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labeled A through J.
num_classes = 10
np.random.seed(133)

def maybe_extract(filename, force=False):
    root = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
    if os.path.isdir(root) and not force:
        # You may override by setting force=True.
        print('%s already present - Skipping extraction of %s.' % (root, filenam...
Problem 1 Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
# Solution for Problem 1
import random

print('Displaying images of train folders')
# Looping through train folders and displaying a random image of each folder
for path in train_folders:
    image_file = os.path.join(path, random.choice(os.listdir(path)))
    display(Image(filename=image_file))
print('Displaying image...
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk, and curate them independently. Later we'll merge them into a single dataset of manageable size. We'll convert the ...
image_size = 28  # Pixel width and height.
pixel_depth = 255.0  # Number of levels per pixel.

def load_letter(folder, min_num_images):
    """Load the data for a single letter label."""
    image_files = os.listdir(folder)
    dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dt...
Problem 2 Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
# Solution for Problem 2
def show_first_image(datasets):
    for pickl in datasets:
        print('Showing a first image from pickle ', pickl)
        try:
            with open(pickl, 'rb') as f:
                letter_set = pickle.load(f)
            plt.imshow(letter_set[0])
        except Exception as e:
            ...
Problem 3 Another check: we expect the data to be balanced across classes. Verify that.
def show_dataset_shape(datasets):
    for pickl in datasets:
        try:
            with open(pickl, 'rb') as f:
                letter_set = pickle.load(f)
            print('Shape of pickle ', pickl, 'is', np.shape(letter_set))
        except Exception as e:
            print('Unable to show image from pickle '...
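Printing shapes verifies sizes, but the balance claim can be made explicit by comparing per-class sample counts. A minimal sketch, self-contained here with synthetic 1-D arrays standing in for the unpickled letter sets (`check_balance` and the counts are illustrative, not part of the notebook):

```python
import numpy as np

def check_balance(class_arrays, tolerance=0.05):
    """Return per-class sample counts and whether every count stays
    within `tolerance` relative deviation from the mean count."""
    counts = np.array([len(a) for a in class_arrays])
    deviation = np.abs(counts - counts.mean()) / counts.mean()
    return counts, bool((deviation <= tolerance).all())

# Synthetic stand-ins for the unpickled letter datasets (three shown):
classes = [np.zeros(5290), np.zeros(5291), np.zeros(5292)]
counts, balanced = check_balance(classes)
print(counts, balanced)  # → [5290 5291 5292] True
```

Running the same function over the real `letter_set` arrays loaded from each pickle would confirm the classes are near-uniform.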
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9. Also create a validation dataset for hyperparameter tuning.
def make_arrays(nb_rows, img_size):
    if nb_rows:
        dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
        labels = np.ndarray(nb_rows, dtype=np.int32)
    else:
        dataset, labels = None, None
    return dataset, labels

def merge_datasets(pickle_files, train_size, valid_size=0):
    num_classes = len(...
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
def randomize(dataset, labels):
    permutation = np.random.permutation(labels.shape[0])
    shuffled_dataset = dataset[permutation, :, :]
    shuffled_labels = labels[permutation]
    return shuffled_dataset, shuffled_labels

train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomi...
Problem 4 Convince yourself that the data is still good after shuffling!
print('Printing Train, validation and test labels after shuffling')

def print_first_10_labels(labels):
    printing_labels = []
    for i in range(10):
        printing_labels.append(labels[i])
    print(printing_labels)

print_first_10_labels(train_labels)
print_first_10_labels(test_labels)
print_first_10_labels(vali...
Finally, let's save the data for later reuse:
pickle_file = os.path.join(data_root, 'notMNIST.pickle')

try:
    f = open(pickle_file, 'wb')
    save = {
        'train_dataset': train_dataset,
        'train_labels': train_labels,
        'valid_dataset': valid_dataset,
        'valid_labels': valid_labels,
        'test_dataset': test_dataset,
        'test_labels': test_labels,
    }
    pi...
Problem 5 By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok ...
logreg_model_clf = LogisticRegression()
nsamples, nx, ny = train_dataset.shape
d2_train_dataset = train_dataset.reshape((nsamples, nx * ny))
logreg_model_clf.fit(d2_train_dataset, train_labels)

from sklearn.metrics import accuracy_score
nsamples, nx, ny = valid_dataset.shape
d2_valid_dataset = valid_dataset.reshape((nsamp...
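The overlap measurement asked for in Problem 5 can be sketched by hashing each flattened image to a byte string and counting exact matches. This is a minimal sketch using synthetic arrays in place of the real train/test datasets (`overlap_count` is a name introduced here for illustration):

```python
import numpy as np

def overlap_count(a, b):
    """Count rows of `a` that also appear in `b`, by hashing each
    image to its raw byte string (exact-match duplicates only)."""
    b_hashes = {row.tobytes() for row in b}
    return sum(row.tobytes() in b_hashes for row in a)

# Synthetic demo: a small "test set" sharing two identical images with "train".
rng = np.random.RandomState(0)
train = rng.rand(10, 28, 28).astype(np.float32)
test = rng.rand(5, 28, 28).astype(np.float32)
test[0], test[3] = train[2], train[7]
print(overlap_count(test, train))  # → 2
```

Near-duplicate detection (images differing by a pixel or two) would need a fuzzier scheme, but exact hashing is already enough to expose the overlap this dataset is known for.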
Now the Hotels
url = 'http://www.bringfido.com/lodging/city/new_haven_ct_us'
r = Render(url)
result = r.frame.toHtml()
# QString should be converted to string before being processed by lxml
formatted_result = str(result.toAscii())
tree = html.fromstring(formatted_result)

# Now using the correct XPath we fetch the URLs of the archives
archiv...
code/.ipynb_checkpoints/bf_qt_scraping-checkpoint.ipynb
mattgiguere/doglodge
mit
Now Get the Links
links = []
for lnk in archive_links:
    print(lnk.xpath('div/h1/a/@href')[0])
    links.append(lnk.xpath('div/h1/a/@href')[0])
    print('*' * 25)

lnk.xpath('//*/div/h1/a/@href')[0]
links
Loading Reviews Next, we want to step through each page, and scrape the reviews for each hotel.
url_base = 'http://www.bringfido.com'
r.update_url(url_base + links[0])
result = r.frame.toHtml()
# QString should be converted to string before being processed by lxml
formatted_result = str(result.toAscii())
tree = html.fromstring(formatted_result)

hotel_description = tree.xpath('//*[@class="body"]/text()')
details = ...
Load software and filenames definitions
from fretbursts import *
init_notebook()
from IPython.display import display
out_notebooks/usALEX-5samples-E-corrected-all-ph-out-12d.ipynb
tritemio/multispot_paper
mit
Data folder:
data_dir = './data/singlespot/'

import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
List of data files:
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)

## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
Data load. Initial loading of the data:
d = loader.photon_hdf5(filename=files_dict[data_id])
Load the leakage coefficient from disk:
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
Load the direct excitation coefficient ($d_{exAA}$) from disk:
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
Load the gamma-factor ($\gamma$) from disk:
gamma_fname = 'results/usALEX - gamma factor - all-ph.csv'
gamma = np.loadtxt(gamma_fname)
print('Gamma-factor:', gamma)
Update d with the correction coefficients:
d.leakage = leakage
d.dir_ex = dir_ex_aa
d.gamma = gamma
Laser alternation selection. At this point we have only the timestamps and the detector numbers:
d.ph_times_t[0][:3], d.ph_times_t[0][-3:]  #, d.det_t

print('First and last timestamps: {:10,} {:10,}'.format(d.ph_times_t[0][0], d.ph_times_t[0][-1]))
print('Total number of timestamps: {:10,}'.format(d.ph_times_t[0].size))
We need to define some parameters: donor and acceptor channels, the excitation period, and the donor and acceptor excitation windows:
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
We should check that everything is OK with an alternation histogram:
plot_alternation_hist(d)
If the plot looks good we can apply the parameters with:
loader.alex_apply_period(d)
print('D+A photons in D-excitation period: {:10,}'.format(d.D_ex[0].sum()))
print('D+A photons in A-excitation period: {:10,}'.format(d.A_ex[0].sum()))
Measurement info. All the measurement data is in the d variable. We can print it:
d
Or check the measurement duration:
d.time_max
Compute background. Compute the background using an automatic threshold:
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Burst search and selection
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);

# if data_id in ['7d', '27d']:
#     ds = d.select_bursts(select_bursts.size, th1=20)
# else:
#     ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all...
Donor Leakage fit
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]

E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
                                      x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E...
Burst sizes
nt_th1 = 50

dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)

Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
    ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
    nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th

plt.figure()...
Fret fit. Max position of the Kernel Density Estimation (KDE):
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()

fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plo...
Weighted mean of $E$ of each burst:
ds_fret.fit_E_m(weights='size')
Gaussian fit (no weights):
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Gaussian fit (using burst size as weights):
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')

E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig / np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter....
Stoichiometry fit. Max position of the Kernel Density Estimation (KDE):
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03)  # weights='size', add_naa=True
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)

fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter...
The Maximum likelihood fit for a Gaussian population is the mean:
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Computing the weighted mean and weighted standard deviation we get:
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0],
                                  weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S) / weights.sum()
S_std_dev = np.sqrt(np.dot(weights, (S - S_mean)**2) / weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Save data to file
sample = data_id
The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
             'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
             'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
             'E_pr_do_kde nt_mean\n')
This is just a trick to format the different variables:
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
            **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name...
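The trick is easier to see in a self-contained miniature: each name gets a per-type format fragment, and the fragments are joined into one row template. The variable names and values below are illustrative, not the notebook's full list:

```python
# Illustrative values standing in for the notebook's fitted quantities.
sample, n_bursts_all, E_kde_w = '12d', 1234, 0.45

variables = 'sample n_bursts_all E_kde_w\n'
variables_csv = variables.replace(' ', ',')   # CSV header line
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
# Strings and integer counts get their own formats; everything else is a float.
fmt_dict = {'sample': fmt_str,
            **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join(fmt_dict.get(name, fmt_float) % name for name in variables.split())
row = var_fmt.format(**var_dict)
print(variables_csv, row)  # row → '12d, 1234, 0.450000'
```

Appending `variables_csv` once and then one `row` per sample yields the CSV the later notebooks read back.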
Data folder:
data_dir = './data/singlespot/'
out_notebooks/usALEX-5samples-PR-raw-dir_ex_aa-fit-out-AexAem-17d.ipynb
tritemio/multispot_paper
mit
Check that the folder exists:
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
List of data files in data_dir:
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
file_list

## Selection for POLIMI 2012-12-6 dataset
# file_list.pop(2)
# file_list = file_list[1:-2]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']

## Selection for P.E. 2012-12-6 dataset
# file_list...
Laser alternation selection. At this point we have only the timestamps and the detector numbers:
d.ph_times_t, d.det_t
If the plot looks good we can apply the parameters with:
loader.alex_apply_period(d)
Burst search and selection
from mpl_toolkits.axes_grid1 import AxesGrid
import lmfit
print('lmfit version:', lmfit.__version__)

assert d.dir_ex == 0
assert d.leakage == 0

d.burst_search(m=10, F=6, ph_sel=ph_sel)
print(d.ph_sel, d.num_bursts)

ds_sa = d.select_bursts(select_bursts.naa, th1=30)
ds_sa.num_bursts
Preliminary selection and plots
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])

ds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)
ds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)
ds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)
ds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)
...
A-direct excitation fitting. To extract the A-direct excitation coefficient we need to fit the S values for the A-only population. The S value for the A-only population is fitted with different methods: - Histogram fit with 2 Gaussians or with 2 asymmetric Gaussians (an asymmetric Gaussian has right- and left-side of ...
dx = ds_sa

bin_width = 0.03
bandwidth = 0.03
bins = np.r_[-0.2 : 1 : bin_width]
x_kde = np.arange(bins.min(), bins.max(), 0.0002)

## Weights
weights = None

## Histogram fit
fitter_g = mfit.MultiFitter(dx.S)
fitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_g.fit_histogram(model=mfit.factory_two_gaussia...
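An "asymmetric Gaussian" in the sense used above has different widths on the two sides of the peak. A minimal sketch of such a function (the exact `mfit` parametrization may differ; this is only to illustrate the shape):

```python
import numpy as np

def asym_gaussian(x, center, sigma_left, sigma_right, amplitude=1.0):
    """Gaussian whose width is sigma_left below the peak and
    sigma_right above it, so the two tails fall off differently."""
    sigma = np.where(x < center, sigma_left, sigma_right)
    return amplitude * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

# Narrow left side, broad right side: the peak stays at `center`,
# but the curve decays faster for x < center than for x > center.
x = np.array([-0.1, 0.0, 0.1])
y = asym_gaussian(x, center=0.0, sigma_left=0.05, sigma_right=0.1)
```

Fitting such a shape lets the S histogram accommodate the skew of the A-only population that a symmetric Gaussian cannot.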
Zero threshold on nd. Select bursts with: $$n_d < 0$$
dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)

fitter = bext.bursts_fitter(dx, 'S')
fitter.fit_histogram(model=mfit.factory_gaussian(center=0.1))
S_1peaks_th = fitter.params.loc[0, 'center']
dir_ex_S1p = S_1peaks_th / (1 - S_1peaks_th)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S1p)

mfi...
Selection 1 Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a Gaussian fitted to the $S$ histogram of the FRET population.
dx = ds_sa

## Weights
weights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'],
                          fitter_g.params.loc[0, 'p2_sigma'])
weights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0

## Histogram fit
fitter_w1 = mfit.MultiFitter(dx.S)
fitter_w1.weights = [weights]
fitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : ...
Selection 2 Bursts are here weighted using weights $w$: $$w = n_{aa} - |n_a + n_d|$$
## Weights
sizes = dx.nd[0] + dx.na[0]  # - dir_ex_S_kde_w3*dx.naa[0]
weights = dx.naa[0] - abs(sizes)
weights[weights < 0] = 0

## Histogram
fitter_w4 = mfit.MultiFitter(dx.S)
fitter_w4.weights = [weights]
fitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w4.fit_histogram(model=mfit.factory_two_gaussians(...
Selection 3 Bursts are here selected according to: $$n_{aa} - |n_a + n_d| > 30$$
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
print(ds_saw.num_bursts)

dx = ds_saw

## Weights
weights = None

## 2-Gaussians
fitter_w5 = mfit.MultiFitter(dx.S)
fitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5.fit_histogram(model=mfit.factory_two_gaus...
Save data to file
sample = data_id
n_bursts_aa = ds_sas.num_bursts[0]
The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '
             'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '
             'S_2peaks_w5 S_2peaks_w5_fiterr\n')
This is just a trick to format the different variables:
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
            **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name...
1. Get arXiv data about machine learning. Write an API querier and extract papers with the terms 'machine learning' or 'artificial intelligence'. Get 2000 results... and play nice!
class Arxiv_querier():
    '''
    This class takes as an input a query and the number of results, and returns all
    the parsed results. Includes routines to deal with multiple pages of results.
    '''
    def __init__(self, base_url="http://export.arxiv.org/api/query?"):
        '''
        Initialise ...
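The parsing half of such a querier can be sketched offline: the arXiv API returns an Atom feed, so the response reduces to walking `entry` elements. The Atom snippet below is fabricated for illustration; a real querier would first fetch `http://export.arxiv.org/api/query?search_query=...` and respect the API's rate limits:

```python
import xml.etree.ElementTree as ET

ATOM_NS = '{http://www.w3.org/2005/Atom}'

def parse_arxiv_feed(xml_text):
    """Extract (title, id) pairs from an arXiv API Atom response."""
    root = ET.fromstring(xml_text)
    return [(entry.find(ATOM_NS + 'title').text, entry.find(ATOM_NS + 'id').text)
            for entry in root.iter(ATOM_NS + 'entry')]

# Fabricated single-entry response for demonstration only.
sample = '''<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><id>http://arxiv.org/abs/0000.0001</id>
         <title>A machine learning paper</title></entry>
</feed>'''
print(parse_arxiv_feed(sample))
```

Pagination then amounts to re-requesting with an increasing `start` offset until the requested number of results is collected.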
notebooks/ml_topic_analysis_exploration.ipynb
Juan-Mateos/coll_int_ai_case
mit
2. Some exploratory analysis
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize, RegexpTokenizer, PunktSentenceTokenizer
from nltk.stem import WordNetLemmatizer, SnowballStemmer, PorterStemmer
import scipy
import ast
import string as st
from bs4 import BeautifulSoup
import gensim
from gensim.models.coherencem...
See <a href='https://arxiv.org/help/api/user-manual'>here</a> for abbreviations of categories. In a nutshell, AI is AI, LG is 'Learning', CV is 'Computer Vision', CL is 'Computation and Language', and NE is 'Neural and Evolutionary Computing'. stat.ML is kind of self-explanatory. We seem to be picking up the main things.
# NB do we want to remove hyphens?
punct = re.sub('-', '', st.punctuation)

def comp_sentence(sentence):
    '''
    Takes a sentence and pre-processes it.
    The output is the sentence as a bag of words
    '''
    # Remove line breaks and hyphens
    sentence = re.sub('\n', ' ', sentence)
    sentence = re.sub('-', ' ...
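A simplified, self-contained version of that pre-processing step looks like this (a toy stopword set stands in for NLTK's full English list; the notebook's real pipeline also lemmatizes):

```python
import re
import string

# Minimal stand-in stopword list; the notebook uses NLTK's full English list.
STOPWORDS = {'the', 'a', 'an', 'of', 'and', 'in', 'is', 'we', 'this'}
# Punctuation to strip, with hyphens excluded (they are split into spaces instead).
punct = re.sub('-', '', string.punctuation)

def comp_sentence(sentence):
    """Pre-process one sentence into a bag of lowercase word tokens:
    strip line breaks, split hyphenated words, drop punctuation and stopwords."""
    sentence = re.sub(r'\n', ' ', sentence)
    sentence = re.sub('-', ' ', sentence)
    sentence = sentence.translate(str.maketrans('', '', punct)).lower()
    return [tok for tok in sentence.split() if tok not in STOPWORDS]

tokens = comp_sentence('We study state-of-the-art\nmachine learning.')
print(tokens)  # → ['study', 'state', 'art', 'machine', 'learning']
```

Splitting hyphens before stripping punctuation is what turns 'state-of-the-art' into separate tokens rather than one glued word.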
Lots of the rare words seem to be typos and so forth. We remove them
#Removing rare words clean_corpus_no_rare = [[x for x in el if x not in rare_words] for el in clean_corpus]
2 NLP (topic modelling & word embeddings)
# Identify 2-grams (frequent in science!)
bigram_transformer = gensim.models.Phrases(clean_corpus_no_rare)

# Train the model on the corpus
# Let's do a bit of grid search
# model = gensim.models.Word2Vec(bigram_transformer[clean_corpus], size=360, window=15, min_count=2, iter=20)

model.most_similar('ai_safety')
model....
Some of this is interesting, but it doesn't seem to be picking up the policy-related terms (safety, discrimination). Next stages: focus on policy-related terms. Can we look for papers in keyword dictionaries identified through the word embeddings? Obtain Google Scholar data.
# How many authors are there in the data? Can we collect all their institutions from Google Scholar?
paper_authors = pd.Series([x for el in all_papers['authors'] for x in el.split(", ")])
paper_authors_unique = paper_authors.drop_duplicates()
len(paper_authors_unique)
We have 68,000 authors. It might take a while to get their data from Google Scholar
# Top authors and frequencies
authors_freq = paper_authors.value_counts()

fig, ax = plt.subplots(figsize=(10, 3))
ax.hist(authors_freq, bins=30)
ax.set_title('Distribution of publications')

# Pretty skewed distribution!
print(authors_freq.describe())
np.sum(authors_freq > 2)
Fewer than 10,000 authors have 3+ papers in the data.
%%time
# Test run
import scholarly

@ratelim.patient(max_calls=30, time_interval=60)
def get_scholar_data(scholarly_object):
    ''''''
    try:
        scholarly_object = next(scholarly_object)
        metadata = {}
        metadata['name'] = scholarly_object.name
        metadata['affiliation'] = s...
1. General Mixture Models. pomegranate has a very efficient implementation of mixture models, particularly Gaussian mixture models. Let's take a look at how fast pomegranate is versus sklearn, and then see how much faster parallelization can get it to be.
n, d, k = 1000000, 5, 3
X, y = create_dataset(n, d, k)

print "sklearn GMM"
%timeit GaussianMixture(n_components=k, covariance_type='full', max_iter=15, tol=1e-10).fit(X)
print
print "pomegranate GMM"
%timeit GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=15, stop_threshold=1e-...
tutorials/old/Tutorial_7_Parallelization.ipynb
jmschrei/pomegranate
mit
It looks like on a large dataset not only is pomegranate faster than sklearn at performing 15 iterations of EM on 3 million 5 dimensional datapoints with 3 clusters, but the parallelization is able to help in speeding things up. Let's now take a look at the time it takes to make predictions using GMMs. Let's fit the mod...
d, k = 25, 2
X, y = create_dataset(1000, d, k)
a = GaussianMixture(k, n_init=1, max_iter=25).fit(X)
b = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=25)

del X, y
n = 1000000
X, y = create_dataset(n, d, k)

print "sklearn GMM"
%timeit -n 1 a.predict_proba(X)
print
print "pomeg...
It looks like pomegranate can be slightly slower than sklearn when using a single processor, but that it can be parallelized to get faster performance. At the same time, predictions at this level happen so quickly (millions per second) that this may not be the most reliable test for parallelization. To ensure that we'r...
print (b.predict_proba(X) - b.predict_proba(X, n_jobs=4)).sum()
Great, no difference between the two. Let's now make sure that pomegranate and sklearn are learning basically the same thing. Let's fit both models to some 2 dimensional 2 component data and make sure that they both extract the underlying clusters by plotting them.
d, k = 2, 2
X, y = create_dataset(1000, d, k, alpha=2)
a = GaussianMixture(k, n_init=1, max_iter=25).fit(X)
b = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=25)

y1, y2 = a.predict(X), b.predict(X)

plt.figure(figsize=(16, 6))
plt.subplot(121)
plt.title("sklearn clusters", font...
It looks like we're getting the same basic results for the two. The two algorithms are initialized a bit differently, and so it can be difficult to directly compare the results between them, but it looks like they're getting roughly the same results. 3. Multivariate Gaussian HMM Now let's move on to training a hidden M...
X = numpy.random.randn(1000, 500, 50)

print "pomegranate Gaussian HMM (1 job)"
%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5)
print
print "pomegranate Gaussian HMM (2 jobs)"
%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5, n_jobs...
All we had to do was pass in the n_jobs parameter to the fit function in order to get a speed improvement. It looks like we're getting a really good speed improvement, as well! This is mostly because the HMM algorithms perform a lot more operations than the other models, and so spend the vast majority of time with the ...
model = HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=2, verbose=False)

print "pomegranate Gaussian HMM (1 job)"
%timeit predict_proba(model, X)
print
print "pomegranate Gaussian HMM (2 jobs)"
%timeit predict_proba(model, X, n_jobs=2)
Great, we're getting a really good speedup on that as well! Looks like the parallel processing is more efficient with a bigger, more complex model, than with a simple one. This can make sense, because all inference/training is more complex, and so there is more time with the GIL released compared to with the simpler op...
def create_model(mus):
    n = mus.shape[0]

    starts = numpy.zeros(n)
    starts[0] = 1.

    ends = numpy.zeros(n)
    ends[-1] = 0.5

    transition_matrix = numpy.zeros((n, n))
    distributions = []
    for i in range(n):
        transition_matrix[i, i] = 0.5
        if i < n - 1:
            ...
Looks like we're getting a really nice speed improvement when training this complex model. Let's take a look now at the time it takes to do inference with it.
model = create_mixture(mus)
print "pomegranate Mixture of Gaussian HMMs (1 job)"
%timeit model.predict_proba(X)
print

model = create_mixture(mus)
print "pomegranate Mixture of Gaussian HMMs (2 jobs)"
%timeit model.predict_proba(X, n_jobs=2)
The inner product of blades in GAlgebra is zero if either operand is a scalar: $$\begin{aligned} \boldsymbol{A}_{r}\wedge\boldsymbol{B}_{s} &\equiv \left<{\boldsymbol{A}_{r}\boldsymbol{B}_{s}}\right>_{r+s} \\ \boldsymbol{A}_{r}\cdot\boldsymbol{B}_{s} &\equiv \left\{ \begin{array}{...
c|a
a|c
c|A
A|c
examples/ipython/inner_product.ipynb
arsenovic/galgebra
bsd-3-clause
$ab=a \wedge b + a \cdot b$ holds for vectors:
a*b
a^b
a|b
(a*b)-(a^b)-(a|b)
$aA=a \wedge A + a \cdot A$ holds for the products between vectors and multivectors:
a*A
a^A
a|A
(a*A)-(a^A)-(a|A)
$AB=A \wedge B + A \cdot B$ does NOT hold for the products between multivectors and multivectors:
A*B
A|B
(A*B)-(A^B)-(A|B)
(A<B)+(A|B)+(A>B)-A*B
Toolkit: Visualization Functions. This class will introduce 3 different visualizations that can be used with classification and regression neural networks. Confusion Matrix - for any type of classification neural network. ROC Curve - for binary classification. Lift Curve - for reg...
%matplotlib inline import matplotlib.pyplot as plt from sklearn.metrics import roc_curve, auc # Plot a confusion matrix. # cm is the confusion matrix, names are the names of the classes. def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=...
t81_558_class4_class_reg.ipynb
jbliss1234/ML
apache-2.0
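A confusion matrix is often shown row-normalized, so the diagonal reads as per-class recall. That normalization step can be sketched on its own (`cm` here is a made-up 2x2 matrix, rows = expected, columns = predicted):

```python
import numpy as np

# Hypothetical raw confusion matrix
cm = np.array([[50, 2],
               [ 5, 43]])

# Divide each row by its total so rows sum to 1
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
```

After normalization, entry `[i, i]` is the fraction of class `i` examples classified correctly.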
Binary Classification Binary classification is used to create a model that classifies between only two classes. These two classes are often called "positive" and "negative". Consider the following program that uses the wcbreast_wdbc dataset to classify if a breast tumor is cancerous (malignant) or not (benign). The ...
import os import pandas as pd from sklearn.model_selection import train_test_split import tensorflow.contrib.learn as skflow import numpy as np from sklearn import metrics path = "./data/" filename = os.path.join(path,"wcbreast_wdbc.csv") df = pd.read_csv(filename,na_values=['NA','?']) # Encode feature vect...
t81_558_class4_class_reg.ipynb
jbliss1234/ML
apache-2.0
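Since `sklearn.cross_validation` and `tensorflow.contrib.learn` have both been removed from their libraries, here is a minimal modern sketch of the same binary task, using scikit-learn's bundled breast-cancer dataset in place of `wcbreast_wdbc.csv` and `LogisticRegression` in place of the TensorFlow model (both substitutions are mine):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Built-in stand-in for the wcbreast_wdbc.csv data
X, y = load_breast_cancer(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Simple baseline classifier for the malignant/benign decision
clf = LogisticRegression(max_iter=5000)
clf.fit(x_train, y_train)
acc = accuracy_score(y_test, clf.predict(x_test))
```

The same `x_test`/`y_test` split feeds the confusion-matrix and ROC plots that follow.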
Confusion Matrix The confusion matrix is a common visualization for both binary and larger classification problems. Often a model will have difficulty differentiating between two classes. For example, a neural network might be really good at telling the difference between cats and dogs, but not so good at telling the...
import numpy as np from sklearn import svm, datasets from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix pred = classifier.predict(x_test) # Compute confusion matrix cm = confusion_matrix(y_test, pred) np.set_printoptions(precision=2) print('Confusion matrix, withou...
t81_558_class4_class_reg.ipynb
jbliss1234/ML
apache-2.0
The above two confusion matrices show the same network. The bottom (normalized) is the type you will normally see. Notice the two labels. The label "B" means benign (no cancer) and the label "M" means malignant (cancer). The left-right (x) axis shows the predictions; the top-bottom (y) axis shows the expected outcomes. A perf...
pred = classifier.predict_proba(x_test) pred = pred[:,1] # Only positive cases # print(pred[:,1]) plot_roc(pred,y_test)
t81_558_class4_class_reg.ipynb
jbliss1234/ML
apache-2.0
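The `plot_roc` helper is built on scikit-learn's `roc_curve` and `auc`. Those two calls can be sketched on their own with a tiny made-up score vector (`scores` plays the role of `predict_proba`'s positive-class column):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Made-up ground truth and classifier scores for four examples
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

# roc_curve sweeps the decision threshold; auc integrates the curve
fpr, tpr, thresholds = roc_curve(y_true, scores)
roc_auc = auc(fpr, tpr)
```

An AUC of 1.0 means perfect ranking of positives above negatives; 0.5 means chance.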
Classification We've already seen multi-class classification with the iris dataset. Confusion matrices work just fine with 3 classes. The following code generates a confusion matrix for iris.
import os import pandas as pd from sklearn.model_selection import train_test_split import tensorflow.contrib.learn as skflow import numpy as np path = "./data/" filename = os.path.join(path,"iris.csv") df = pd.read_csv(filename,na_values=['NA','?']) # Encode feature vector encode_numeric_zscore(df,'petal_w'...
t81_558_class4_class_reg.ipynb
jbliss1234/ML
apache-2.0
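A modern equivalent of the iris confusion-matrix code, sketched with scikit-learn only (`LogisticRegression` stands in for the original `tensorflow.contrib.learn` classifier, and the bundled iris data replaces `iris.csv`):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Built-in iris data instead of reading iris.csv
X, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)
cm = confusion_matrix(y_test, clf.predict(x_test))
```

The resulting 3x3 matrix can be passed straight to a plotting helper such as `plot_confusion_matrix` above.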
See the strong diagonal? Iris is easy. See the light blue near the bottom? Sometimes virginica is confused for versicolor. Regression We've already seen regression with the MPG dataset. Regression uses its own set of visualizations; one of the most common is the lift chart. The following code generates a lift char...
import tensorflow.contrib.learn as skflow import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) # create feature vector missing_median(df, 'hor...
t81_558_class4_class_reg.ipynb
jbliss1234/ML
apache-2.0
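The essence of a lift chart is sorting the (expected, predicted) pairs by the expected value and plotting both over the sorted index. A minimal sketch with synthetic MPG-like data (all numbers here are made up):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs as a script
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
y_true = rng.uniform(10, 40, size=100)        # pretend true MPG values
y_pred = y_true + rng.normal(0, 2, size=100)  # hypothetical noisy predictions

# Sort both series by the expected value
order = np.argsort(y_true)
plt.plot(y_pred[order], label='prediction')
plt.plot(y_true[order], label='expected')
plt.legend()
plt.savefig('lift_chart.png')
```

A good model's prediction curve hugs the monotone expected curve; large gaps show where the model over- or under-predicts.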
Reordering the Callendar-Van Dusen equation we obtain the following $$ AT+BT^2+C(T-100)T^3 =\frac{R(T)}{R_0}-1 \enspace,$$ which we can write in matrix form as $Mx=p$, where $$\begin{bmatrix} T_1 & T_1^2 & (T_1-100)T_1^3 \\ T_2 & T_2^2 & (T_2-100)T_2^3 \\ T_3 & T_3^2 & (T_3-100)T_3^3\end{bmatrix} \begin{bmatrix} A\\ B \\ ...
R0=25; M=np.array([[T_exp[0],(T_exp[0])**2,(T_exp[0]-100)*(T_exp[0])**3],[T_exp[1],(T_exp[1])**2,(T_exp[1]-100)*(T_exp[1])**3],[T_exp[2],(T_exp[2])**2,(T_exp[2]-100)*(T_exp[2])**3]]); p=np.array([[(R_exp[0]/R0)-1],[(R_exp[1]/R0)-1],[(R_exp[2]/R0)-1]]); x = np.linalg.solve(M,p) #solve linear equations system np.set_pri...
notebooks/Ex_2_3.ipynb
agmarrugo/sensors-actuators
mit
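One way to sanity-check this kind of 3x3 solve is to build `M` and `p` from coefficients chosen in advance and confirm that `np.linalg.solve` recovers them. The `A_true`, `B_true`, `C_true` values and calibration temperatures below are hypothetical (picked at typical platinum-RTD magnitudes), not the notebook's data:

```python
import numpy as np

# Hypothetical "ground truth" coefficients and calibration points
A_true, B_true, C_true = 3.9e-3, -5.8e-7, -4.0e-12
T_exp = np.array([-150.0, 100.0, 400.0])

# Build M column by column from the rearranged Callendar-Van Dusen terms
M = np.column_stack([T_exp, T_exp**2, (T_exp - 100.0) * T_exp**3])
# Synthesize the right-hand side p = M x from the known coefficients
p = M @ np.array([A_true, B_true, C_true])

x = np.linalg.solve(M, p)
```

Recovering the planted coefficients confirms the matrix was assembled consistently with the equation.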
We have found the coefficients $A$, $B$, and $C$ necessary to describe the sensor's transfer function. Now we plot it from -200 °C to 600 °C.
A=x[0];B=x[1];C=x[2]; T_range= np.arange(start = -200, stop = 601, step = 1); R_funT= R0*(1+A[0]*T_range+B[0]*(T_range)**2+C[0]*(T_range-100)*(T_range)**3); plt.plot(T_range,R_funT,T_exp[0],R_exp[0],'ro',T_exp[1],R_exp[1],'ro',T_exp[2],R_exp[2],'ro'); plt.ylabel('Sensor resistance [Ohm]') plt.xlabel('Temperature [C]') ...
notebooks/Ex_2_3.ipynb
agmarrugo/sensors-actuators
mit
Reddy Mikks model Given the following variables: $\begin{aligned} x_1 = \textrm{Tons of exterior paint produced daily} \newline x_2 = \textrm{Tons of interior paint produced daily} \end{aligned}$ and knowing that we want to maximize the profit, where \$5000 is the profit from exterior paint and \$4000 is the profit fro...
reddymikks = pywraplp.Solver('Reddy_Mikks', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING) x1 = reddymikks.NumVar(0, reddymikks.infinity(), 'x1') x2 = reddymikks.NumVar(0, reddymikks.infinity(), 'x2') reddymikks.Add(6*x1 + 4*x2 <= 24) reddymikks.Add(x1 + 2*x2 <= 6) reddymikks.Add(-x1 + x2 <= 1) reddymikks.Add(x2 <= 2) pro...
Linear Programming with OR-Tools.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
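The same Reddy Mikks model can be cross-checked with `scipy.optimize.linprog`, an alternative to OR-Tools (`linprog` minimizes, so the profit vector is negated; profits are in thousands of USD):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 5*x1 + 4*x2  ->  minimize -5*x1 - 4*x2
c = [-5, -4]
# Raw-material and demand constraints from the model above
A_ub = [[6, 4], [1, 2], [-1, 1], [0, 1]]
b_ub = [24, 6, 1, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = res.x
profit = -res.fun
```

Both solvers should land on the same vertex of the feasible region, where the two raw-material constraints are binding.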
More simple problems A company that operates 10 hours a day manufactures two products on three sequential processes. The following data characterizes the problem:
import pandas as pd problemdata = pd.DataFrame({'Process 1': [10, 5], 'Process 2':[6, 20], 'Process 3':[8, 10], 'Unit profit':[20, 30]}) problemdata.index = ['Product 1', 'Product 2'] problemdata
Linear Programming with OR-Tools.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
There are 10 hours a day dedicated to production. Process times are given in minutes per unit, while profit is given in USD. The optimal mix of the two products would be characterized by the following model: $\begin{aligned} x_1 = \textrm{Units of product 1} \newline x_2 = \textrm{Units of product 2} \end{aligned}...
simpleprod = pywraplp.Solver('Simple_Production', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING) x1 = simpleprod.NumVar(0, simpleprod.infinity(), 'x1') x2 = simpleprod.NumVar(0, simpleprod.infinity(), 'x2') for i in problemdata.columns[:-1]: simpleprod.Add(problemdata.loc[problemdata.index[0], i]*x1 + problemdata.loc[p...
Linear Programming with OR-Tools.ipynb
rayjustinhuang/DataAnalysisandMachineLearning
mit
1. Download Text8 Corpus
import os.path if not os.path.isfile('text8'): !wget -c http://mattmahoney.net/dc/text8.zip !unzip text8.zip
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
Import & Set up Logging I'm not going to set up logging due to the verbose output it would display in the notebook, but if you want it, uncomment the lines in the cell below.
LOGS = False if LOGS: import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
2. Build Word2Vec Model
from gensim.models import Word2Vec, KeyedVectors from gensim.models.word2vec import Text8Corpus # Using params from Word2Vec_FastText_Comparison params = { 'alpha': 0.05, 'size': 100, 'window': 5, 'iter': 5, 'min_count': 5, 'sample': 1e-4, 'sg': 1, 'hs': 0, 'negative': 5 } model =...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
See the Word2Vec tutorial for how to initialize and save this model. Comparing the traditional implementation with the Annoy and Nmslib approximations
# Set up the model and vector that we are using in the comparison from gensim.similarities.index import AnnoyIndexer from gensim.similarities.nmslib import NmslibIndexer model.init_sims() annoy_index = AnnoyIndexer(model, 300) nmslib_index = NmslibIndexer(model, {'M': 100, 'indexThreadQty': 1, 'efConstruction': 100}, ...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
3. Construct Nmslib Index with model & make a similarity query Creating an indexer An instance of NmslibIndexer needs to be created in order to use Nmslib in gensim. The NmslibIndexer class is located in gensim.similarities.nmslib. NmslibIndexer() takes three parameters: model: A Word2Vec or Doc2Vec model index_params: ...
# Building nmslib indexer nmslib_index = NmslibIndexer(model, {'M': 100, 'indexThreadQty': 1, 'efConstruction': 100}, {'efSearch': 10}) # Derive the vector for the word "science" in our model vector = model["science"] # The instance of NmslibIndexer we just created is passed approximate_neighbors = model.most_similar([...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
Analyzing the results The closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for "science". In this case the results are almost the same. 4. Verify & Evaluate performance Persisting Indexes You can save and load your indexes from/to disk to prevent having to...
import os fname = '/tmp/mymodel.index' # Persist index to disk nmslib_index.save(fname) # Load index back if os.path.exists(fname): nmslib_index2 = NmslibIndexer.load(fname) nmslib_index2.model = model # Results should be identical to above vector = model["science"] approximate_neighbors2 = model.most_simil...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
Be sure to load with the same model that was used originally; otherwise you will get unexpected behavior. Save memory by memory-mapping indices saved to disk The Nmslib library has a useful feature: indices can be memory-mapped from disk. This saves memory when the same index is used by several processes. Below are tw...
# Remove verbosity from code below (if logging active) if LOGS: logging.disable(logging.CRITICAL) from multiprocessing import Process import psutil
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
Bad Example: Two processes load the Word2vec model from disk and create their own Nmslib indices from that model.
%%time model.save('/tmp/mymodel.pkl') def f(process_id): print('Process Id: {}'.format(os.getpid())) process = psutil.Process(os.getpid()) new_model = Word2Vec.load('/tmp/mymodel.pkl') vector = new_model["science"] nmslib_index = NmslibIndexer(new_model, {'M': 100, 'indexThreadQty': 1, 'efConstruc...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
Good Example: Two processes load both the Word2vec model and the index from disk and memory-map the index.
%%time model.save('/tmp/mymodel.pkl') def f(process_id): print('Process Id: {}'.format(os.getpid())) process = psutil.Process(os.getpid()) new_model = Word2Vec.load('/tmp/mymodel.pkl') vector = new_model["science"] nmslib_index = NmslibIndexer.load('/tmp/mymodel.index') nmslib_index.model = ne...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
5. Evaluate relationship of parameters to initialization/query time and accuracy, compared with annoy
import matplotlib.pyplot as plt %matplotlib inline
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
Build a dataset of initialization times and accuracy measures
exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)] # For calculating query time queries = 1000 def create_evaluation_graph(x_values, y_values_init, y_values_accuracy, y_values_query, param_name): plt.figure(1, figsize=(12, 6)) plt.subplot(231) plt.plot(x_value...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
6. Work with Google word2vec files Our model can be exported to a word2vec C format. There is a binary and a plain text word2vec format. Both can be read with a variety of other software, or imported back into gensim as a KeyedVectors object.
# To export our model as text model.wv.save_word2vec_format('/tmp/vectors.txt', binary=False) from smart_open import open # View the first 3 lines of the exported file # The first line has the total number of entries and the vector dimension count. # The next lines have a key (a string) followed by its vector. with ...
docs/notebooks/nmslibtutorial.ipynb
RaRe-Technologies/gensim
lgpl-2.1
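The plain-text word2vec format described above is simple enough to parse by hand: the header line holds the vocabulary size and vector dimension, and each subsequent line is a key followed by its vector components. A sketch with tiny made-up file contents:

```python
# Made-up contents of a word2vec text-format file
sample = """3 4
science 0.1 0.2 0.3 0.4
biology 0.5 0.6 0.7 0.8
physics 0.9 1.0 1.1 1.2
"""

lines = sample.strip().split('\n')
# Header: "<vocab_size> <dimensions>"
vocab_size, dims = map(int, lines[0].split())
# Remaining lines: "<key> <v1> <v2> ..."
vectors = {}
for line in lines[1:]:
    parts = line.split()
    vectors[parts[0]] = [float(v) for v in parts[1:]]
```

In practice you would let `KeyedVectors.load_word2vec_format` do this, but the hand-rolled version shows exactly what the file contains.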