ATL Python is complete

ATL Python is complete and performs as expected.
Marcus Vinicius de Carvalho 2020-02-19 15:04:30 +08:00
parent 16a2e68086
commit 661c1c4d07
3 changed files with 19 additions and 5 deletions

ATL.py

@@ -382,6 +382,12 @@ def ATL(epochs: int = 1, n_batch: int = 1000, device='cpu'):
 np.mean(metrics['classification_target_loss']),
 np.min(metrics['classification_target_loss']),
 metrics['classification_target_loss'][-1]))
+print(('%s %s %s %s Reconstruction Source Loss:' + Fore.GREEN + ' %f' + Fore.YELLOW + ' %f' + Fore.RED + ' %f' + Fore.BLUE + ' %f' + Style.RESET_ALL) % (
+string_max, string_mean, string_min, string_now,
+np.max(metrics['reconstruction_source_loss']),
+np.mean(metrics['reconstruction_source_loss']),
+np.min(metrics['reconstruction_source_loss']),
+metrics['reconstruction_source_loss'][-1]))
 print(('%s %s %s %s Reconstruction Target Loss:' + Fore.GREEN + ' %f' + Fore.YELLOW + ' %f' + Fore.RED + ' %f' + Fore.BLUE + ' %f' + Style.RESET_ALL) % (
 string_max, string_mean, string_min, string_now,
 np.max(metrics['reconstruction_target_loss']),
@@ -452,6 +458,7 @@ def ATL(epochs: int = 1, n_batch: int = 1000, device='cpu'):
 test(nn, Xs, ys, is_source=True, is_discriminative=True, metrics=metrics)
 test(ae, Xt, is_source=False, is_discriminative=False, metrics=metrics)
+test(ae, Xs, is_source=True, is_discriminative=False, metrics=metrics)
 metrics['train_time'].append(time.time())
 for epoch in range(epochs):
@@ -492,6 +499,10 @@ def ATL(epochs: int = 1, n_batch: int = 1000, device='cpu'):
 plot_time(metrics['train_time'], metrics['test_time'])
 plot_node_evolution(metrics['node_evolution'])
 plot_classification_rates(metrics['classification_rate_source'], metrics['classification_rate_target'])
+plot_agmm(metrics['agmm_source_size_by_batch'], metrics['agmm_source_size_by_batch'])
+plot_losses(metrics['classification_source_loss'], metrics['classification_target_loss'], metrics['reconstruction_source_loss'], metrics['reconstruction_target_loss'])
+plot_generative_network_significance(nn.BIAS, nn.VAR)
+plot_discriminative_network_significance(ae.BIAS, ae.VAR)
 return result
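
The new Reconstruction Source Loss line above follows the same colorama pattern as the existing status prints. Below is a minimal standalone sketch of that pattern; the `string_max`/`string_mean`/`string_min`/`string_now` labels and the metric values are placeholders invented for the example, since their actual definitions sit outside this hunk.

```python
# Minimal standalone sketch of the colorama status-line pattern used in ATL.py.
# The label strings and metric values below are placeholders, not repository code.
import numpy as np
from colorama import Fore, Style, init

init()  # enables ANSI colors on Windows terminals too

metrics = {'reconstruction_source_loss': [0.42, 0.31, 0.27, 0.25]}  # dummy per-batch values

string_max = 'Max'
string_mean = 'Mean'
string_min = 'Min'
string_now = 'Now'

print(('%s %s %s %s Reconstruction Source Loss:'
       + Fore.GREEN + ' %f' + Fore.YELLOW + ' %f'
       + Fore.RED + ' %f' + Fore.BLUE + ' %f' + Style.RESET_ALL) % (
    string_max, string_mean, string_min, string_now,
    np.max(metrics['reconstruction_source_loss']),    # maximum so far (green)
    np.mean(metrics['reconstruction_source_loss']),   # mean (yellow)
    np.min(metrics['reconstruction_source_loss']),    # minimum (red)
    metrics['reconstruction_source_loss'][-1]))       # current mini-batch (blue)
```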

DataManipulator.py

@@ -128,10 +128,10 @@ class DataManipulator:
 for x, data in zip(chunkify(self.X), chunkify(self.data)):
 x_mean = np.mean(x, axis=0)
-norm_1 = np.linalg.norm(x - x_mean)
+norm_1 = np.linalg.norm(x - x_mean, axis=0)
 norm_2 = np.linalg.norm(x - x_mean, axis=1)
 numerator = norm_2
-denominator = 2 * (norm_1.std() ** 2)
+denominator = 2. * (norm_1.std() ** 2)
 probability = np.exp(-numerator / denominator)
 idx = np.argsort(probability)
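
A note on the `axis=0` change above: my reading (an assumption, not stated in the commit) is that without `axis`, `np.linalg.norm` returns a single Frobenius norm, so `norm_1.std()` is 0 and the Gaussian denominator collapses; with `axis=0` it returns one norm per feature, whose spread gives a usable bandwidth. A small self-contained sketch with dummy data:

```python
# Sketch of the norm computation touched in this hunk, on dummy data.
# Only the variable names mirror DataManipulator; the data is made up.
import numpy as np

x = np.random.rand(8, 3)        # pretend mini-batch: 8 samples, 3 features
x_mean = np.mean(x, axis=0)     # per-feature mean, shape (3,)

norm_old = np.linalg.norm(x - x_mean)         # old line: Frobenius norm, a scalar
norm_1 = np.linalg.norm(x - x_mean, axis=0)   # new line: one norm per feature, shape (3,)
norm_2 = np.linalg.norm(x - x_mean, axis=1)   # per-sample distance to the mean, shape (8,)

print(norm_old.std())   # 0.0 - std of a single scalar
print(norm_1.std())     # typically > 0 - spread across features

denominator = 2. * (norm_1.std() ** 2)
probability = np.exp(-norm_2 / denominator)   # pseudo-probability per sample
idx = np.argsort(probability)                 # samples ordered by that probability
```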

README.md

@@ -35,9 +35,7 @@ series = {CIKM 19}
 If you want to see the original code used for this paper, access [ATL_Matlab](https://github.com/Ivsucram/ATL_Matlab)
-`ATL_Python` is a reconstruction of `ATL_Matlab` made by the same author, but using Python 3.6 and PyTorch (with autograd enabled and GPU support). The code is still not one-to-one and some differences in results can be found (especially in the data split methods in `DataManipulator`); however, the network structure is correct and can be used by whoever is interested in this work in order to understand the structure or to build comparative results with their own research work.
-Having said that, expect `ATL_Python` to be updated in the following weeks, including function refactoring and function documentation.
+`ATL_Python` is a reconstruction of `ATL_Matlab` made by the same author, but using Python 3.6 and PyTorch (with autograd enabled and GPU support).
 # ATL_Python
@@ -77,6 +75,7 @@ ATL statuses are printed at the end of every minibatch, where you will be able to
 - Classification Rate for the Target (maximum, mean, minimum and current)
 - Classification Loss for the Source (maximum, mean, minimum and current)
 - Classification Loss for the Target (maximum, mean, minimum and current)
+- Reconstruction Loss for the Source (maximum, mean, minimum and current)
 - Reconstruction Loss for the Target (maximum, mean, minimum and current)
 - Kullback-Leibler Loss (maximum, mean, minimum and current)
 - Number of nodes (maximum, mean, minimum and current)
@@ -89,6 +88,10 @@ At the end of the process, ATL will plot 6 graphs:
 - The processing time per mini-batch and the total processing time as well, both for training and testing
 - The evolution of nodes over time
 - The target and source classification rate evolution, as well as the final mean accuracy of the network
+- The number of GMMs on the Source AGMM and the Target AGMM
+- Losses for the source and target classification, as well as the source and target reconstruction
+- Bias and Variance of the discriminative phase
+- Bias and Variance of the generative phase
 ```
 Thank you.
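
The new plot list above mentions a combined loss figure. The commit only adds the call to `plot_losses(...)` in ATL.py (which fixes the argument order: source/target classification losses, then source/target reconstruction losses), not its body, so the following is purely a hypothetical matplotlib sketch of what such a helper could look like, with dummy values:

```python
# Hypothetical sketch only - not the repository's plot_losses implementation.
import matplotlib.pyplot as plt

def plot_losses(cls_source, cls_target, rec_source, rec_target):
    # Each argument is assumed to be a list of per-mini-batch loss values.
    plt.plot(cls_source, label='Classification loss (source)')
    plt.plot(cls_target, label='Classification loss (target)')
    plt.plot(rec_source, label='Reconstruction loss (source)')
    plt.plot(rec_target, label='Reconstruction loss (target)')
    plt.xlabel('Mini-batch')
    plt.ylabel('Loss')
    plt.legend()
    plt.show()

# Dummy usage:
plot_losses([0.9, 0.7, 0.6], [1.1, 0.9, 0.8], [0.5, 0.4, 0.35], [0.6, 0.5, 0.45])
```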